Test Report: Docker_macOS 14956

5dfb368b1e05cc59a5a3533b7608973265a62e27:2022-10-25:26275

Failed tests (75/246)

Order  Failed test  Duration (s)
34 TestCertOptions 40.68
35 TestCertExpiration 282.47
36 TestDockerFlags 40.73
37 TestForceSystemdFlag 40.48
38 TestForceSystemdEnv 40.62
137 TestIngressAddonLegacy/StartLegacyK8sCluster 255.12
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 89.61
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 89.52
141 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.48
199 TestMultiNode/serial/RestartMultiNode 185.6
210 TestRunningBinaryUpgrade 1868.72
212 TestKubernetesUpgrade 55.41
213 TestMissingContainerUpgrade 202.44
226 TestStoppedBinaryUpgrade/Upgrade 1568.92
227 TestStoppedBinaryUpgrade/MinikubeLogs 0.48
236 TestPause/serial/Start 39.39
239 TestNoKubernetes/serial/StartWithK8s 39.4
240 TestNoKubernetes/serial/StartWithStopK8s 61.8
241 TestNoKubernetes/serial/Start 61.81
244 TestNoKubernetes/serial/Stop 14.84
245 TestNoKubernetes/serial/StartNoArgs 61.16
249 TestNetworkPlugins/group/auto/Start 39.14
250 TestNetworkPlugins/group/kindnet/Start 39.28
251 TestNetworkPlugins/group/cilium/Start 40.26
252 TestNetworkPlugins/group/calico/Start 39.47
253 TestNetworkPlugins/group/false/Start 39.44
254 TestNetworkPlugins/group/bridge/Start 39.3
255 TestNetworkPlugins/group/enable-default-cni/Start 39.29
256 TestNetworkPlugins/group/kubenet/Start 4.86
258 TestStartStop/group/old-k8s-version/serial/FirstStart 39.86
260 TestStartStop/group/no-preload/serial/FirstStart 39.41
261 TestStartStop/group/old-k8s-version/serial/DeployApp 0.39
262 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.44
263 TestStartStop/group/old-k8s-version/serial/Stop 15
264 TestStartStop/group/no-preload/serial/DeployApp 0.39
265 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.44
266 TestStartStop/group/no-preload/serial/Stop 14.89
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.54
268 TestStartStop/group/old-k8s-version/serial/SecondStart 62.12
269 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.7
270 TestStartStop/group/no-preload/serial/SecondStart 61.45
271 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.18
272 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.28
273 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.4
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.18
275 TestStartStop/group/old-k8s-version/serial/Pause 0.62
276 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.22
277 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
278 TestStartStop/group/no-preload/serial/Pause 0.63
280 TestStartStop/group/embed-certs/serial/FirstStart 39.86
282 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.72
283 TestStartStop/group/embed-certs/serial/DeployApp 0.39
284 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.44
285 TestStartStop/group/embed-certs/serial/Stop 15
286 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.39
287 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.5
288 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.89
289 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.54
290 TestStartStop/group/embed-certs/serial/SecondStart 62.23
291 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.7
292 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 61.99
293 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.18
294 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.21
295 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.54
296 TestStartStop/group/embed-certs/serial/Pause 0.58
297 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.18
298 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.22
299 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.4
300 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.71
302 TestStartStop/group/newest-cni/serial/FirstStart 39.63
305 TestStartStop/group/newest-cni/serial/Stop 14.86
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.56
307 TestStartStop/group/newest-cni/serial/SecondStart 61.84
310 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
311 TestStartStop/group/newest-cni/serial/Pause 0.56
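Most of the short ~40 s failures in this table abort inside minikube's StartHost retry, and their logs below all point at the same underlying error: Docker Desktop's containerd socket refusing connections while the preload-sidecar "docker run" is prepared. A minimal Go sketch of that diagnosis, assuming the socket path quoted verbatim in the stderr blocks below (a standalone probe, not part of the test suite):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken verbatim from the stderr blocks in this report;
	// "connection refused" here is what surfaces as exit status 125 in the
	// preload-sidecar "docker run" attempts.
	const sock = "/var/run/desktop-containerd/containerd.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("containerd socket unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket reachable")
}
```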
TestCertOptions (40.68s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-212746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-212746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: exit status 80 (39.19992241s)

-- stdout --
	* [cert-options-212746] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cert-options-212746 in cluster cert-options-212746
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-212746" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for cert-options-212746 container: docker run --rm --name cert-options-212746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-212746 --entrypoint /usr/bin/test -v cert-options-212746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p cert-options-212746" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for cert-options-212746 container: docker run --rm --name cert-options-212746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-212746 --entrypoint /usr/bin/test -v cert-options-212746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for cert-options-212746 container: docker run --rm --name cert-options-212746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-212746 --entrypoint /usr/bin/test -v cert-options-212746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-212746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost" : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-212746 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-212746 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (204.323507ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-212746": docker container inspect cert-options-212746 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-212746
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_c1f8366d59c5f8f6460a712ebd6036fcc73bcb99_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-212746 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:82: failed to inspect container for the port get port 8555 for "cert-options-212746": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-212746: exit status 1
stdout:

stderr:
Error: No such container: cert-options-212746
cert_options_test.go:85: expected to get a non-zero forwarded port but got 0
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-212746 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-212746 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (205.531348ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-212746": docker container inspect cert-options-212746 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-212746
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_e59a677a82728474bde049b1a4510f5e357f9593_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-212746 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-212746": docker container inspect cert-options-212746 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-212746
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_e59a677a82728474bde049b1a4510f5e357f9593_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2022-10-25 21:28:26.299356 -0700 PDT m=+4263.916601943
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-212746
helpers_test.go:235: (dbg) docker inspect cert-options-212746:

-- stdout --
	[
	    {
	        "Name": "cert-options-212746",
	        "Id": "215f00c91947ad128962415af3c656bea4ecb944c8fb1f17baf5dabc41fc3c42",
	        "Created": "2022-10-26T04:28:17.203994476Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "cert-options-212746"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-212746 -n cert-options-212746
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-212746 -n cert-options-212746: exit status 7 (113.060027ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:28:26.476250   17334 status.go:249] status error: host: state: unknown state "cert-options-212746": docker container inspect cert-options-212746 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-212746

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-212746" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-212746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-212746
--- FAIL: TestCertOptions (40.68s)
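The SAN assertions at cert_options_test.go:69 never had a certificate to inspect because the node container was gone. For reference, a minimal sketch of the check itself in Go, assuming a local copy of /var/lib/minikube/certs/apiserver.crt saved as apiserver.crt (an illustrative name; the test reads the file over SSH instead):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the apiserver certificate.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// SANs requested via --apiserver-names / --apiserver-ips above.
	fmt.Println("DNS SANs:", cert.DNSNames)    // want localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // want 127.0.0.1, 192.168.15.15
}
```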
TestCertExpiration (282.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-212703 --memory=2048 --cert-expiration=3m --driver=docker 
E1025 21:27:04.148560    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-212703 --memory=2048 --cert-expiration=3m --driver=docker : exit status 80 (39.537984431s)

-- stdout --
	* [cert-expiration-212703] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cert-expiration-212703 in cluster cert-expiration-212703
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-212703" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-212703" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-212703 --memory=2048 --cert-expiration=3m --driver=docker " : exit status 80

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-212703 --memory=2048 --cert-expiration=8760h --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-212703 --memory=2048 --cert-expiration=8760h --driver=docker : exit status 80 (1m1.913134716s)

-- stdout --
	* [cert-expiration-212703] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-212703 in cluster cert-expiration-212703
	* Pulling base image ...
	* docker "cert-expiration-212703" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-212703" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-212703" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-212703 --memory=2048 --cert-expiration=8760h --driver=docker " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-212703] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-212703 in cluster cert-expiration-212703
	* Pulling base image ...
	* docker "cert-expiration-212703" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-212703" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-212703" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for cert-expiration-212703 container: docker run --rm --name cert-expiration-212703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-212703 --entrypoint /usr/bin/test -v cert-expiration-212703:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2022-10-25 21:31:44.578689 -0700 PDT m=+4462.150942628
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-212703
helpers_test.go:235: (dbg) docker inspect cert-expiration-212703:

-- stdout --
	[
	    {
	        "Name": "cert-expiration-212703",
	        "Id": "b8cd41684f0a75ca76b94031c52e5a1d848d329288f2b67288db92606e1709a0",
	        "Created": "2022-10-26T04:31:35.603639059Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "cert-expiration-212703"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-212703 -n cert-expiration-212703
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-212703 -n cert-expiration-212703: exit status 7 (112.028478ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:31:44.756362   18568 status.go:249] status error: host: state: unknown state "cert-expiration-212703": docker container inspect cert-expiration-212703 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-212703

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-212703" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-212703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-212703
--- FAIL: TestCertExpiration (282.47s)
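TestCertExpiration provisions a cluster whose certificates expire after 3m, then expects the follow-up start with --cert-expiration=8760h to warn about the lapsed certs; here both starts failed before any certificate was minted. A minimal sketch of the expiry check the test builds on, assuming a local copy of one generated cert saved as client.crt (an illustrative name):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local copy of a certificate minted with --cert-expiration=3m.
	data, err := os.ReadFile("client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in client.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("NotAfter:", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("expired: a restart should warn and regenerate the certs")
	}
}
```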
TestDockerFlags (40.73s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-212705 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-212705 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 80 (39.310574088s)

-- stdout --
	* [docker-flags-212705] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-212705 in cluster docker-flags-212705
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-212705" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:27:05.944686   16818 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:27:05.944847   16818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:27:05.944852   16818 out.go:309] Setting ErrFile to fd 2...
	I1025 21:27:05.944856   16818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:27:05.944962   16818 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:27:05.945463   16818 out.go:303] Setting JSON to false
	I1025 21:27:05.960104   16818 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5194,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:27:05.960220   16818 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:27:05.982308   16818 out.go:177] * [docker-flags-212705] minikube v1.27.1 on Darwin 12.6
	I1025 21:27:06.003573   16818 notify.go:220] Checking for updates...
	I1025 21:27:06.025524   16818 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:27:06.047471   16818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:27:06.069540   16818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:27:06.091425   16818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:27:06.113678   16818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:27:06.136305   16818 config.go:180] Loaded profile config "cert-expiration-212703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:27:06.136514   16818 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:27:06.136620   16818 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:27:06.202351   16818 docker.go:137] docker version: linux-20.10.17
	I1025 21:27:06.202476   16818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:27:06.330950   16818 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:27:06.280379493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:27:06.374651   16818 out.go:177] * Using the docker driver based on user configuration
	I1025 21:27:06.396512   16818 start.go:282] selected driver: docker
	I1025 21:27:06.396550   16818 start.go:808] validating driver "docker" against <nil>
	I1025 21:27:06.396574   16818 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:27:06.399921   16818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:27:06.529100   16818 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:27:06.478529282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:27:06.529247   16818 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:27:06.529378   16818 start_flags.go:883] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1025 21:27:06.550326   16818 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:27:06.571458   16818 cni.go:95] Creating CNI manager for ""
	I1025 21:27:06.571487   16818 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:27:06.571503   16818 start_flags.go:317] config:
	{Name:docker-flags-212705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:docker-flags-212705 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet
}
	I1025 21:27:06.593329   16818 out.go:177] * Starting control plane node docker-flags-212705 in cluster docker-flags-212705
	I1025 21:27:06.614470   16818 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:27:06.636353   16818 out.go:177] * Pulling base image ...
	I1025 21:27:06.679342   16818 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:27:06.679353   16818 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:27:06.679414   16818 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:27:06.679434   16818 cache.go:57] Caching tarball of preloaded images
	I1025 21:27:06.679626   16818 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:27:06.679651   16818 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:27:06.680622   16818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/docker-flags-212705/config.json ...
	I1025 21:27:06.680738   16818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/docker-flags-212705/config.json: {Name:mk9d6b01b68842bf86bdf46dfb7d45ae718759f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:27:06.742520   16818 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:27:06.742547   16818 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:27:06.742557   16818 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:27:06.742608   16818 start.go:364] acquiring machines lock for docker-flags-212705: {Name:mke91fb7b252fb3e63899aec2227f46070a14971 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:27:06.742768   16818 start.go:368] acquired machines lock for "docker-flags-212705" in 146.181µs
	I1025 21:27:06.742793   16818 start.go:93] Provisioning new machine with config: &{Name:docker-flags-212705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:docker-flags-212705 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:27:06.742853   16818 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:27:06.764907   16818 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:27:06.765318   16818 start.go:159] libmachine.API.Create for "docker-flags-212705" (driver="docker")
	I1025 21:27:06.765359   16818 client.go:168] LocalClient.Create starting
	I1025 21:27:06.765505   16818 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:27:06.765569   16818 main.go:134] libmachine: Decoding PEM data...
	I1025 21:27:06.765598   16818 main.go:134] libmachine: Parsing certificate...
	I1025 21:27:06.765700   16818 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:27:06.765748   16818 main.go:134] libmachine: Decoding PEM data...
	I1025 21:27:06.765771   16818 main.go:134] libmachine: Parsing certificate...
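The six libmachine lines above are a standard PEM sanity check on ca.pem and cert.pem: read the file, decode the PEM block, parse the certificate. A self-contained sketch of those steps using only the Go standard library (a sketch of the logged behavior, not minikube's actual code):

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // checkCert mirrors the read/decode/parse sequence logged above.
    func checkCert(path string) error {
    	data, err := os.ReadFile(path) // "Reading certificate data"
    	if err != nil {
    		return fmt.Errorf("read %s: %w", path, err)
    	}
    	block, _ := pem.Decode(data) // "Decoding PEM data"
    	if block == nil {
    		return fmt.Errorf("%s contains no PEM block", path)
    	}
    	if _, err := x509.ParseCertificate(block.Bytes); err != nil { // "Parsing certificate"
    		return fmt.Errorf("parse %s: %w", path, err)
    	}
    	return nil
    }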
	I1025 21:27:06.766624   16818 cli_runner.go:164] Run: docker network inspect docker-flags-212705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:27:06.827822   16818 cli_runner.go:211] docker network inspect docker-flags-212705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:27:06.827909   16818 network_create.go:272] running [docker network inspect docker-flags-212705] to gather additional debugging logs...
	I1025 21:27:06.827928   16818 cli_runner.go:164] Run: docker network inspect docker-flags-212705
	W1025 21:27:06.890388   16818 cli_runner.go:211] docker network inspect docker-flags-212705 returned with exit code 1
	I1025 21:27:06.890410   16818 network_create.go:275] error running [docker network inspect docker-flags-212705]: docker network inspect docker-flags-212705: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-212705
	I1025 21:27:06.890420   16818 network_create.go:277] output of [docker network inspect docker-flags-212705]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-212705
	
	** /stderr **
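The templated inspect fails, so network_create.go reruns a plain docker network inspect purely to capture the stdout/stderr captured above; exit status 1 with "No such network" simply means the network does not exist yet and must be created. A hedged sketch of that debug fallback (function name and logging illustrative):

    import (
    	"log"
    	"os/exec"
    )

    // debugInspect reruns the plain inspect only to gather extra
    // debugging output after the templated inspect has failed.
    func debugInspect(name string) {
    	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    	if err != nil {
    		log.Printf("network %s not found, will create it: %v\n%s", name, err, out)
    	}
    }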
	I1025 21:27:06.890533   16818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:27:06.952369   16818 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8] misses:0}
	I1025 21:27:06.952407   16818 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:06.952420   16818 network_create.go:115] attempt to create docker network docker-flags-212705 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:27:06.952494   16818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705
	W1025 21:27:07.013273   16818 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705 returned with exit code 1
	W1025 21:27:07.013322   16818 network_create.go:107] failed to create docker network docker-flags-212705 192.168.49.0/24, will retry: subnet is taken
	I1025 21:27:07.013583   16818 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:false}} dirty:map[] misses:0}
	I1025 21:27:07.013599   16818 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:07.013833   16818 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640] misses:0}
	I1025 21:27:07.013850   16818 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:07.013860   16818 network_create.go:115] attempt to create docker network docker-flags-212705 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:27:07.013913   16818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705
	W1025 21:27:07.075722   16818 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705 returned with exit code 1
	W1025 21:27:07.075762   16818 network_create.go:107] failed to create docker network docker-flags-212705 192.168.58.0/24, will retry: subnet is taken
	I1025 21:27:07.076029   16818 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640] misses:1}
	I1025 21:27:07.076047   16818 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:07.076256   16818 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380] misses:1}
	I1025 21:27:07.076268   16818 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:07.076281   16818 network_create.go:115] attempt to create docker network docker-flags-212705 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:27:07.076340   16818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705
	W1025 21:27:07.136712   16818 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705 returned with exit code 1
	W1025 21:27:07.136761   16818 network_create.go:107] failed to create docker network docker-flags-212705 192.168.67.0/24, will retry: subnet is taken
	I1025 21:27:07.136997   16818 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380] misses:2}
	I1025 21:27:07.137016   16818 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:07.137244   16818 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] misses:2}
	I1025 21:27:07.137256   16818 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:07.137263   16818 network_create.go:115] attempt to create docker network docker-flags-212705 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:27:07.137321   16818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705
	I1025 21:27:07.227138   16818 network_create.go:99] docker network docker-flags-212705 192.168.76.0/24 created
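The three failed creates above are the subnet-probing loop at work: candidate /24s step the third octet by 9 (49, 58, 67, 76, ...), each "subnet is taken" failure reserves that candidate and moves on, and 192.168.76.0/24 finally succeeds. A sketch of the loop, assuming the docker CLI semantics seen in this log (minikube's labels omitted for brevity):

    import (
    	"fmt"
    	"os/exec"
    )

    // pickSubnet tries candidate /24s until `docker network create`
    // stops failing with "subnet is taken".
    func pickSubnet(name string) (string, error) {
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		cmd := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
    			"-o", "--ip-masq", "-o", "--icc",
    			"-o", "com.docker.network.driver.mtu=1500", name)
    		if err := cmd.Run(); err == nil {
    			return subnet, nil // created
    		}
    		// assumed taken: fall through to the next candidate
    	}
    	return "", fmt.Errorf("no free private subnet for %s", name)
    }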
	I1025 21:27:07.227177   16818 kic.go:106] calculated static IP "192.168.76.2" for the "docker-flags-212705" container
	I1025 21:27:07.227274   16818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:27:07.288987   16818 cli_runner.go:164] Run: docker volume create docker-flags-212705 --label name.minikube.sigs.k8s.io=docker-flags-212705 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:27:07.351076   16818 oci.go:103] Successfully created a docker volume docker-flags-212705
	I1025 21:27:07.351166   16818 cli_runner.go:164] Run: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:27:07.566413   16818 cli_runner.go:211] docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:27:07.566477   16818 client.go:171] LocalClient.Create took 801.10571ms
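The exit code 125 above is the important detail: the "preload sidecar" is a throwaway container that mounts the new volume at /var and runs /usr/bin/test -d /var/lib, so exit 0 or 1 would report whether the directory exists, while 125 means dockerd never started the container at all. A sketch of the probe (image digest elided; the constant name is illustrative):

    import "os/exec"

    const kicBase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094"

    // probeVolume asks a disposable container whether /var/lib exists
    // inside the named volume. Exit 125, as seen in this run, means
    // docker itself failed before the test command could run.
    func probeVolume(volume string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/test",
    		"-v", volume+":/var",
    		kicBase,
    		"-d", "/var/lib").Run()
    }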
	I1025 21:27:09.568705   16818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:27:09.568919   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:09.635645   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:09.635752   16818 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
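Every retry block like the one above is the same lookup: the host port mapped to the container's 22/tcp, extracted with a Go template. Since the node container was never created, each attempt exits 1 and retry.go backs off. A sketch of the lookup itself (function name illustrative; the template is the one in the log):

    import (
    	"os/exec"
    	"strings"
    )

    // sshPort extracts the host port bound to the container's 22/tcp.
    func sshPort(container string) (string, error) {
    	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err // "No such container" while the node is missing
    	}
    	return strings.TrimSpace(string(out)), nil
    }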
	I1025 21:27:09.912604   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:09.976430   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:09.976555   16818 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:10.518525   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:10.581533   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:10.581615   16818 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:11.237647   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:11.301267   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	W1025 21:27:11.301415   16818 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	W1025 21:27:11.301442   16818 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
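The warning above closes the first post-create disk check (used percentage of /var); the df -BG run that follows is its sibling probe for free GiB. Both are meant to run over SSH inside the node, and with no container to dial, session setup fails before either shell ever executes. The two pipelines, wrapped in a Go sketch with local exec shown only for illustration (the example outputs are hypothetical):

    import "os/exec"

    // The same probes start.go issues through ssh_runner; `sh -c`
    // mirrors the invocations in the log lines above and below.
    func diskProbes() {
    	usedPct, _ := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()  // e.g. "31%"
    	freeGiB, _ := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output() // e.g. "42G"
    	_, _ = usedPct, freeGiB
    }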
	I1025 21:27:11.301493   16818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:27:11.301533   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:11.361867   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:11.361952   16818 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:11.593900   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:11.659530   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:11.659648   16818 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:12.107114   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:12.171699   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:12.171780   16818 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:12.492294   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:12.557328   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:12.557407   16818 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:13.111938   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:13.175036   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	W1025 21:27:13.175126   16818 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	W1025 21:27:13.175156   16818 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:13.175177   16818 start.go:128] duration metric: createHost completed in 6.432304299s
	I1025 21:27:13.175184   16818 start.go:83] releasing machines lock for "docker-flags-212705", held for 6.432394081s
	W1025 21:27:13.175199   16818 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for docker-flags-212705 container: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
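This stderr is the root cause of the whole failure: Docker Desktop's dockerd cannot reach its bundled containerd over /var/run/desktop-containerd/containerd.sock, so every `docker run` in this test exits 125 regardless of what minikube does. A hedged preflight that would separate a sick daemon from a minikube bug (standard docker CLI only; the helper is not part of minikube):

    import (
    	"log"
    	"os/exec"
    )

    // daemonHealthy returns an error when the docker server side is
    // unreachable or unable to answer, as in this run.
    func daemonHealthy() error {
    	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").CombinedOutput()
    	if err != nil {
    		log.Printf("docker daemon unhealthy: %v\n%s", err, out)
    	}
    	return err
    }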
	I1025 21:27:13.175600   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:13.235252   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:13.235312   16818 delete.go:82] Unable to get host status for docker-flags-212705, assuming it has already been deleted: state: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	W1025 21:27:13.235470   16818 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for docker-flags-212705 container: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for docker-flags-212705 container: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:27:13.235483   16818 start.go:618] Will try again in 5 seconds ...
	I1025 21:27:18.237811   16818 start.go:364] acquiring machines lock for docker-flags-212705: {Name:mke91fb7b252fb3e63899aec2227f46070a14971 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:27:18.237956   16818 start.go:368] acquired machines lock for "docker-flags-212705" in 107.066µs
	I1025 21:27:18.237985   16818 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:27:18.238000   16818 fix.go:55] fixHost starting: 
	I1025 21:27:18.238361   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:18.301963   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:18.302007   16818 fix.go:103] recreateIfNeeded on docker-flags-212705: state= err=unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:18.302044   16818 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:27:18.323783   16818 out.go:177] * docker "docker-flags-212705" container is missing, will recreate.
	I1025 21:27:18.368652   16818 delete.go:124] DEMOLISHING docker-flags-212705 ...
	I1025 21:27:18.368873   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:18.430090   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	W1025 21:27:18.430133   16818 stop.go:75] unable to get state: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:18.430145   16818 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:18.430520   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:18.491434   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:18.491495   16818 delete.go:82] Unable to get host status for docker-flags-212705, assuming it has already been deleted: state: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:18.491564   16818 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-212705
	W1025 21:27:18.553900   16818 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-212705 returned with exit code 1
	I1025 21:27:18.553927   16818 kic.go:356] could not find the container docker-flags-212705 to remove it. will try anyways
	I1025 21:27:18.554010   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:18.613551   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	W1025 21:27:18.613589   16818 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:18.613655   16818 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-212705 /bin/bash -c "sudo init 0"
	W1025 21:27:18.673550   16818 cli_runner.go:211] docker exec --privileged -t docker-flags-212705 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:27:18.673573   16818 oci.go:646] error shutdown docker-flags-212705: docker exec --privileged -t docker-flags-212705 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-212705
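With no container to stop, the graceful path (docker exec ... "sudo init 0") fails immediately, and oci.go falls back to polling .State.Status until it reads "exited", with the growing delays visible below (0.4s up to 5.5s) before giving up as "might be okay". A sketch of that verify loop, assuming a simple doubling backoff (the log's exact delays are jittered):

    import (
    	"errors"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitExited polls the container state, roughly doubling the delay
    // between attempts, and gives up once the deadline passes.
    func waitExited(container string, deadline time.Duration) error {
    	backoff := 400 * time.Millisecond
    	for end := time.Now().Add(deadline); time.Now().Before(end); backoff *= 2 {
    		out, err := exec.Command("docker", "container", "inspect",
    			container, "--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		time.Sleep(backoff)
    	}
    	return errors.New("couldn't verify container is exited")
    }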
	I1025 21:27:19.673892   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:19.740484   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:19.740539   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:19.740550   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:19.740568   16818 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:20.143283   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:20.204860   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:20.204908   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:20.204920   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:20.204955   16818 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:20.968686   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:21.033050   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:21.033093   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:21.033105   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:21.033124   16818 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:22.513358   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:22.575975   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:22.576015   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:22.576028   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:22.576046   16818 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:23.783803   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:23.847919   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:23.847957   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:23.847968   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:23.847988   16818 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:26.079417   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:26.145394   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:26.145434   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:26.145445   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:26.145471   16818 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:29.253937   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:29.319345   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:29.319409   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:29.319422   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:29.319446   16818 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:34.840011   16818 cli_runner.go:164] Run: docker container inspect docker-flags-212705 --format={{.State.Status}}
	W1025 21:27:34.904486   16818 cli_runner.go:211] docker container inspect docker-flags-212705 --format={{.State.Status}} returned with exit code 1
	I1025 21:27:34.904533   16818 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:34.904545   16818 oci.go:660] temporary error: container docker-flags-212705 status is  but expect it to be exited
	I1025 21:27:34.904568   16818 oci.go:88] couldn't shut down docker-flags-212705 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	 
	I1025 21:27:34.904657   16818 cli_runner.go:164] Run: docker rm -f -v docker-flags-212705
	I1025 21:27:34.970062   16818 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-212705
	W1025 21:27:35.029933   16818 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-212705 returned with exit code 1
	I1025 21:27:35.030019   16818 cli_runner.go:164] Run: docker network inspect docker-flags-212705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:27:35.090461   16818 cli_runner.go:164] Run: docker network rm docker-flags-212705
	W1025 21:27:35.211722   16818 delete.go:139] delete failed (probably ok) <nil>
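Once the graceful shutdown is abandoned, teardown is force-and-forget: both removals above are allowed to fail ("probably ok"), because a missing container and network are exactly the desired end state before the recreate. A sketch of that cleanup (function name illustrative):

    import "os/exec"

    // demolish force-removes the container (with its volumes) and the
    // network; errors are deliberately ignored, mirroring the
    // "delete failed (probably ok)" line above.
    func demolish(name string) {
    	_ = exec.Command("docker", "rm", "-f", "-v", name).Run()
    	_ = exec.Command("docker", "network", "rm", name).Run()
    }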
	I1025 21:27:35.211740   16818 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:27:36.213283   16818 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:27:36.234453   16818 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:27:36.234649   16818 start.go:159] libmachine.API.Create for "docker-flags-212705" (driver="docker")
	I1025 21:27:36.234684   16818 client.go:168] LocalClient.Create starting
	I1025 21:27:36.234867   16818 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:27:36.234945   16818 main.go:134] libmachine: Decoding PEM data...
	I1025 21:27:36.234969   16818 main.go:134] libmachine: Parsing certificate...
	I1025 21:27:36.235053   16818 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:27:36.235113   16818 main.go:134] libmachine: Decoding PEM data...
	I1025 21:27:36.235130   16818 main.go:134] libmachine: Parsing certificate...
	I1025 21:27:36.256556   16818 cli_runner.go:164] Run: docker network inspect docker-flags-212705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:27:36.320446   16818 cli_runner.go:211] docker network inspect docker-flags-212705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:27:36.320541   16818 network_create.go:272] running [docker network inspect docker-flags-212705] to gather additional debugging logs...
	I1025 21:27:36.320562   16818 cli_runner.go:164] Run: docker network inspect docker-flags-212705
	W1025 21:27:36.383323   16818 cli_runner.go:211] docker network inspect docker-flags-212705 returned with exit code 1
	I1025 21:27:36.383356   16818 network_create.go:275] error running [docker network inspect docker-flags-212705]: docker network inspect docker-flags-212705: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-212705
	I1025 21:27:36.383369   16818 network_create.go:277] output of [docker network inspect docker-flags-212705]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-212705
	
	** /stderr **
	I1025 21:27:36.383456   16818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:27:36.445076   16818 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] misses:2}
	I1025 21:27:36.445107   16818 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.445316   16818 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] misses:3}
	I1025 21:27:36.445336   16818 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.445522   16818 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] amended:false}} dirty:map[] misses:0}
	I1025 21:27:36.445529   16818 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.445722   16818 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] amended:false}} dirty:map[] misses:0}
	I1025 21:27:36.445730   16818 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.445915   16818 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8 192.168.85.0:0xc000b04220] misses:0}
	I1025 21:27:36.445931   16818 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.445941   16818 network_create.go:115] attempt to create docker network docker-flags-212705 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:27:36.446005   16818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705
	W1025 21:27:36.508192   16818 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705 returned with exit code 1
	W1025 21:27:36.508290   16818 network_create.go:107] failed to create docker network docker-flags-212705 192.168.85.0/24, will retry: subnet is taken
	I1025 21:27:36.508561   16818 network.go:286] skipping subnet 192.168.85.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8 192.168.85.0:0xc000b04220] misses:1}
	I1025 21:27:36.508580   16818 network.go:244] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.508781   16818 network.go:295] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8] amended:true}} dirty:map[192.168.49.0:0xc0005da5e8 192.168.58.0:0xc0005da640 192.168.67.0:0xc000a99380 192.168.76.0:0xc000a993b8 192.168.85.0:0xc000b04220 192.168.94.0:0xc000b04258] misses:1}
	I1025 21:27:36.508801   16818 network.go:241] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:27:36.508808   16818 network_create.go:115] attempt to create docker network docker-flags-212705 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 21:27:36.508878   16818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212705 docker-flags-212705
	I1025 21:27:36.600962   16818 network_create.go:99] docker network docker-flags-212705 192.168.94.0/24 created
	I1025 21:27:36.600991   16818 kic.go:106] calculated static IP "192.168.94.2" for the "docker-flags-212705" container
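kic.go derives the node address directly from the fresh subnet: the gateway takes .1 and the single node takes .2, the ClientMin seen in the subnet dumps above. A tiny sketch of that derivation as inferred from this log (error handling elided; the ".2" rule is an assumption based on the lines above):

    import "net"

    // staticIPFor returns the first client address of the subnet,
    // e.g. 192.168.94.2 for 192.168.94.0/24.
    func staticIPFor(cidr string) net.IP {
    	_, ipnet, _ := net.ParseCIDR(cidr)
    	base := ipnet.IP.To4()
    	return net.IPv4(base[0], base[1], base[2], base[3]+2)
    }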
	I1025 21:27:36.601096   16818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:27:36.664489   16818 cli_runner.go:164] Run: docker volume create docker-flags-212705 --label name.minikube.sigs.k8s.io=docker-flags-212705 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:27:36.726038   16818 oci.go:103] Successfully created a docker volume docker-flags-212705
	I1025 21:27:36.726174   16818 cli_runner.go:164] Run: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:27:36.870389   16818 cli_runner.go:211] docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:27:36.870445   16818 client.go:171] LocalClient.Create took 635.751789ms
	I1025 21:27:38.870807   16818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:27:38.870905   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:38.936664   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:38.936746   16818 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:39.137422   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:39.200781   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:39.200868   16818 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:39.645433   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:39.710568   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:39.710665   16818 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:40.117218   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:40.182194   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:40.182281   16818 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:40.777979   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:40.842988   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	W1025 21:27:40.843086   16818 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	W1025 21:27:40.843102   16818 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:40.843149   16818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:27:40.843188   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:40.904336   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:40.904421   16818 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:41.173136   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:41.238237   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:41.238324   16818 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:41.749399   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:41.814763   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:41.814859   16818 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:42.263362   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:42.327589   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	W1025 21:27:42.327682   16818 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	W1025 21:27:42.327698   16818 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:42.327705   16818 start.go:128] duration metric: createHost completed in 6.114386376s
	I1025 21:27:42.327788   16818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:27:42.327828   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:42.390656   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:42.390741   16818 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:42.706266   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:42.768506   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:42.768585   16818 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:43.034173   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:43.098213   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:43.098302   16818 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:43.868667   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:43.933942   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	W1025 21:27:43.934023   16818 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	W1025 21:27:43.934038   16818 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:43.934085   16818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:27:43.934143   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:43.994871   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:43.994953   16818 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:44.253310   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:44.327158   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:44.327256   16818 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:44.527777   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:44.591443   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	I1025 21:27:44.591534   16818 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:44.964308   16818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705
	W1025 21:27:45.029211   16818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705 returned with exit code 1
	W1025 21:27:45.029302   16818 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	W1025 21:27:45.029317   16818 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212705": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212705: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	I1025 21:27:45.029325   16818 fix.go:57] fixHost completed within 26.791261687s
	I1025 21:27:45.029331   16818 start.go:83] releasing machines lock for "docker-flags-212705", held for 26.791300472s
	W1025 21:27:45.029485   16818 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-212705" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for docker-flags-212705 container: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:27:45.073228   16818 out.go:177] 
	W1025 21:27:45.095163   16818 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for docker-flags-212705 container: docker run --rm --name docker-flags-212705-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212705 --entrypoint /usr/bin/test -v docker-flags-212705:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:27:45.095189   16818 out.go:239] * 
	W1025 21:27:45.096319   16818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:27:45.161044   16818 out.go:177] 
** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-212705 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 80
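
Why the start fails: Docker Desktop's embedded containerd is unreachable, so the "docker run" that prepares the node's volume exits 125 ("connection refused" on /var/run/desktop-containerd/containerd.sock). The node container therefore never exists, and every "docker container inspect ... 22/tcp" call in the log above is the client retrying to discover an SSH port that will never be published; the "df -h /var" and "df -BG /var" disk probes fail for the same reason, since they need that SSH session. A minimal Go sketch of the retry shape visible in the retry.go:31 lines (illustrative only, not minikube's actual helper):

	// Illustrative sketch of the "will retry after ..." pattern above:
	// re-run a probe with a randomized delay until it succeeds or a
	// deadline passes. Names are made up for this sketch.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(deadline time.Duration, probe func() error) error {
		start := time.Now()
		for {
			err := probe()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			wait := time.Duration(200+rand.Intn(600)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		// The probe stands in for the port-22 inspect that keeps failing.
		err := retryUntil(3*time.Second, func() error {
			return errors.New("Error: No such container: docker-flags-212705")
		})
		fmt.Println(err)
	}
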
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-212705 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-212705 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (205.796497ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-212705 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
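
What docker_test.go:57 checks: every --docker-env pair passed at start must appear in the output of "systemctl show docker --property=Environment" on the node. Because the ssh step itself exited 80, the captured output is just "\n\n", so the containment check fails for both pairs. A hedged sketch of that assertion style (variable names are illustrative, not the test's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// On a healthy node the ssh command would print something like
		// "Environment=FOO=BAR BAZ=BAT"; here minikube returned nothing.
		output := "\n\n"
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(output, kv) {
				fmt.Printf("expected env key/value %q to be included in %q\n", kv, output)
			}
		}
	}
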
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-212705 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-212705 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (205.537302ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-212705 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:67: expected "out/minikube-darwin-amd64 -p docker-flags-212705 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
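
docker_test.go:67 is the flag-side counterpart: --docker-opt=debug should surface on dockerd's command line, so the test looks for the substring --debug in "systemctl show docker --property=ExecStart". On a healthy node that property looks roughly like the line below (illustrative; exact fields vary by systemd version). With no host, the output is again empty, and the check fails before it can tell whether the opts were propagated.

	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true ... ; ignore_errors=no ; ... }
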
panic.go:522: *** TestDockerFlags FAILED at 2022-10-25 21:27:45.629444 -0700 PDT m=+4223.246787930
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-212705
helpers_test.go:235: (dbg) docker inspect docker-flags-212705:
-- stdout --
	[
	    {
	        "Name": "docker-flags-212705",
	        "Id": "51380878e5387ac78df73cf1b5a48e0960849071b7d4104177e8f3465c19cdb4",
	        "Created": "2022-10-26T04:27:36.586047327Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-212705"
	        }
	    }
	]
-- /stdout --
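
Note that this inspect output describes the docker-flags-212705 network (a bridge on 192.168.94.0/24), not a container: "Containers": {} confirms nothing ever attached to it. minikube got as far as "docker network create" (whose -o --icc and -o --ip-masq options appear under "Options") before the containerd failure stopped container creation, which is why these two commands behave differently:

	docker network inspect docker-flags-212705     # succeeds: the network was created
	docker container inspect docker-flags-212705   # fails: No such container
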
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-212705 -n docker-flags-212705
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-212705 -n docker-flags-212705: exit status 7 (114.545228ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1025 21:27:45.806863   17118 status.go:249] status error: host: state: unknown state "docker-flags-212705": docker container inspect docker-flags-212705 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-212705
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-212705" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-212705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-212705
--- FAIL: TestDockerFlags (40.73s)

TestForceSystemdFlag (40.48s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-212517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-212517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 80 (39.266804489s)
-- stdout --
	* [force-systemd-flag-212517] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-212517 in cluster force-systemd-flag-212517
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-212517" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1025 21:25:17.839737   16181 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:25:17.839867   16181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:25:17.839873   16181 out.go:309] Setting ErrFile to fd 2...
	I1025 21:25:17.839877   16181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:25:17.839983   16181 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:25:17.840461   16181 out.go:303] Setting JSON to false
	I1025 21:25:17.855588   16181 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5086,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:25:17.855719   16181 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:25:17.878261   16181 out.go:177] * [force-systemd-flag-212517] minikube v1.27.1 on Darwin 12.6
	I1025 21:25:17.922195   16181 notify.go:220] Checking for updates...
	I1025 21:25:17.943837   16181 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:25:17.964888   16181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:25:17.986212   16181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:25:18.007945   16181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:25:18.029180   16181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:25:18.051854   16181 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:25:18.051992   16181 config.go:180] Loaded profile config "running-upgrade-205554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:25:18.052067   16181 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:25:18.122430   16181 docker.go:137] docker version: linux-20.10.17
	I1025 21:25:18.122584   16181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:25:18.250743   16181 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:25:18.186476519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:25:18.292764   16181 out.go:177] * Using the docker driver based on user configuration
	I1025 21:25:18.313811   16181 start.go:282] selected driver: docker
	I1025 21:25:18.313833   16181 start.go:808] validating driver "docker" against <nil>
	I1025 21:25:18.313867   16181 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:25:18.317190   16181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:25:18.445617   16181 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:25:18.381186276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:25:18.445733   16181 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:25:18.445875   16181 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:25:18.467803   16181 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:25:18.489525   16181 cni.go:95] Creating CNI manager for ""
	I1025 21:25:18.489554   16181 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:25:18.489573   16181 start_flags.go:317] config:
	{Name:force-systemd-flag-212517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:force-systemd-flag-212517 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:25:18.511283   16181 out.go:177] * Starting control plane node force-systemd-flag-212517 in cluster force-systemd-flag-212517
	I1025 21:25:18.553359   16181 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:25:18.574383   16181 out.go:177] * Pulling base image ...
	I1025 21:25:18.595605   16181 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:25:18.595656   16181 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:25:18.595695   16181 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:25:18.595725   16181 cache.go:57] Caching tarball of preloaded images
	I1025 21:25:18.595973   16181 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:25:18.595992   16181 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:25:18.596953   16181 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/force-systemd-flag-212517/config.json ...
	I1025 21:25:18.597082   16181 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/force-systemd-flag-212517/config.json: {Name:mka726e3eb2067104b90140831ff2d6fda90baa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:25:18.658096   16181 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:25:18.658122   16181 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:25:18.658131   16181 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:25:18.658187   16181 start.go:364] acquiring machines lock for force-systemd-flag-212517: {Name:mk4b7019b474748541b78842371fb235a7ca7e3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:25:18.658343   16181 start.go:368] acquired machines lock for "force-systemd-flag-212517" in 143.148µs
	I1025 21:25:18.658368   16181 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-212517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:force-systemd-flag-212517 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:25:18.658445   16181 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:25:18.702003   16181 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:25:18.702369   16181 start.go:159] libmachine.API.Create for "force-systemd-flag-212517" (driver="docker")
	I1025 21:25:18.702412   16181 client.go:168] LocalClient.Create starting
	I1025 21:25:18.702604   16181 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:25:18.702686   16181 main.go:134] libmachine: Decoding PEM data...
	I1025 21:25:18.702715   16181 main.go:134] libmachine: Parsing certificate...
	I1025 21:25:18.702813   16181 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:25:18.702860   16181 main.go:134] libmachine: Decoding PEM data...
	I1025 21:25:18.702880   16181 main.go:134] libmachine: Parsing certificate...
	I1025 21:25:18.703687   16181 cli_runner.go:164] Run: docker network inspect force-systemd-flag-212517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:25:18.765197   16181 cli_runner.go:211] docker network inspect force-systemd-flag-212517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:25:18.765283   16181 network_create.go:272] running [docker network inspect force-systemd-flag-212517] to gather additional debugging logs...
	I1025 21:25:18.765301   16181 cli_runner.go:164] Run: docker network inspect force-systemd-flag-212517
	W1025 21:25:18.826524   16181 cli_runner.go:211] docker network inspect force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:18.826547   16181 network_create.go:275] error running [docker network inspect force-systemd-flag-212517]: docker network inspect force-systemd-flag-212517: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-212517
	I1025 21:25:18.826559   16181 network_create.go:277] output of [docker network inspect force-systemd-flag-212517]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-212517
	
	** /stderr **
	I1025 21:25:18.826655   16181 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:25:18.889144   16181 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8] misses:0}
	I1025 21:25:18.889181   16181 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:18.889194   16181 network_create.go:115] attempt to create docker network force-systemd-flag-212517 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:25:18.889265   16181 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-212517 force-systemd-flag-212517
	W1025 21:25:18.950845   16181 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-212517 force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:18.950881   16181 network_create.go:107] failed to create docker network force-systemd-flag-212517 192.168.49.0/24, will retry: subnet is taken
	I1025 21:25:18.951191   16181 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8] amended:false}} dirty:map[] misses:0}
	I1025 21:25:18.951208   16181 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:18.951395   16181 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608] misses:0}
	I1025 21:25:18.951408   16181 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:18.951418   16181 network_create.go:115] attempt to create docker network force-systemd-flag-212517 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:25:18.951482   16181 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-212517 force-systemd-flag-212517
	W1025 21:25:19.012916   16181 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-212517 force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:19.012963   16181 network_create.go:107] failed to create docker network force-systemd-flag-212517 192.168.58.0/24, will retry: subnet is taken
	I1025 21:25:19.013212   16181 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608] misses:1}
	I1025 21:25:19.013232   16181 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:19.013435   16181 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608 192.168.67.0:0xc00099e888] misses:1}
	I1025 21:25:19.013447   16181 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:19.013453   16181 network_create.go:115] attempt to create docker network force-systemd-flag-212517 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:25:19.013522   16181 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-212517 force-systemd-flag-212517
	I1025 21:25:19.103559   16181 network_create.go:99] docker network force-systemd-flag-212517 192.168.67.0/24 created
	I1025 21:25:19.103686   16181 kic.go:106] calculated static IP "192.168.67.2" for the "force-systemd-flag-212517" container
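
The network_create.go lines above show the subnet selection strategy: walk candidate private /24 ranges, stepping the third octet by 9 (192.168.49.0, 192.168.58.0, 192.168.67.0, ...), reserve each candidate for 60 seconds, and move on whenever "docker network create" reports the subnet is taken. A minimal sketch of that walk, assuming the step-by-9 sequence seen in this log:

	// Sketch of the subnet walk (not minikube's code): generate /24
	// candidates 192.168.49.0, .58.0, .67.0, ... and take the first
	// one that is not already reserved or in use.
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // held by another profile's network
			"192.168.58.0/24": true,
		}
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet that is taken:", subnet)
				continue
			}
			fmt.Println("using free private subnet:", subnet)
			break
		}
	}
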
	I1025 21:25:19.103779   16181 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:25:19.165343   16181 cli_runner.go:164] Run: docker volume create force-systemd-flag-212517 --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:25:19.226926   16181 oci.go:103] Successfully created a docker volume force-systemd-flag-212517
	I1025 21:25:19.227044   16181 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:25:19.442285   16181 cli_runner.go:211] docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:25:19.442403   16181 client.go:171] LocalClient.Create took 739.980764ms
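
The command that just failed is the "preload sidecar": a throwaway container that runs the kicbase image with its entrypoint swapped for /usr/bin/test and the profile's named volume mounted at /var, purely so docker creates and populates the volume and the directory check confirms /var/lib exists inside it. Its anatomy (IMAGE stands for the kicbase reference shown above):

	docker run --rm \                      # throwaway: removed on exit
	  --entrypoint /usr/bin/test \         # replace the image entrypoint
	  -v force-systemd-flag-212517:/var \  # mount the named volume at /var
	  IMAGE \
	  -d /var/lib                          # argv for test(1): directory check

Exit status 125 means docker itself failed to create the container (here the daemon cannot reach containerd), as opposed to /usr/bin/test running and returning nonzero.
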
	I1025 21:25:21.444816   16181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:25:21.444945   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:21.509488   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:21.509739   16181 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:21.787503   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:21.856288   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:21.856373   16181 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:22.396953   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:22.462199   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:22.462296   16181 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:23.117915   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:23.180930   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:23.181021   16181 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	W1025 21:25:23.181038   16181 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:23.181106   16181 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:25:23.181158   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:23.240658   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:23.240776   16181 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:23.472706   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:23.538578   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:23.538686   16181 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:23.984032   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:24.050251   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:24.050333   16181 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:24.370798   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:24.435388   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:24.435499   16181 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:24.991736   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:25.053872   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:25.053978   16181 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	W1025 21:25:25.053995   16181 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:25.054005   16181 start.go:128] duration metric: createHost completed in 6.39553967s
	I1025 21:25:25.054014   16181 start.go:83] releasing machines lock for "force-systemd-flag-212517", held for 6.395647371s
	W1025 21:25:25.054028   16181 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:25:25.054420   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:25.114684   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:25.114742   16181 delete.go:82] Unable to get host status for force-systemd-flag-212517, assuming it has already been deleted: state: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	W1025 21:25:25.114944   16181 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:25:25.114956   16181 start.go:618] Will try again in 5 seconds ...
	I1025 21:25:30.117243   16181 start.go:364] acquiring machines lock for force-systemd-flag-212517: {Name:mk4b7019b474748541b78842371fb235a7ca7e3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:25:30.117395   16181 start.go:368] acquired machines lock for "force-systemd-flag-212517" in 118.129µs
	I1025 21:25:30.117427   16181 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:25:30.117442   16181 fix.go:55] fixHost starting: 
	I1025 21:25:30.117821   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:30.181578   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:30.181635   16181 fix.go:103] recreateIfNeeded on force-systemd-flag-212517: state= err=unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:30.181657   16181 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:25:30.203714   16181 out.go:177] * docker "force-systemd-flag-212517" container is missing, will recreate.
	I1025 21:25:30.247336   16181 delete.go:124] DEMOLISHING force-systemd-flag-212517 ...
	I1025 21:25:30.247538   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:30.310190   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	W1025 21:25:30.310229   16181 stop.go:75] unable to get state: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:30.310256   16181 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:30.310611   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:30.370520   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:30.370636   16181 delete.go:82] Unable to get host status for force-systemd-flag-212517, assuming it has already been deleted: state: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:30.370711   16181 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-212517
	W1025 21:25:30.430642   16181 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:30.430669   16181 kic.go:356] could not find the container force-systemd-flag-212517 to remove it. will try anyways
	I1025 21:25:30.430749   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:30.495252   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	W1025 21:25:30.495300   16181 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:30.495376   16181 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-212517 /bin/bash -c "sudo init 0"
	W1025 21:25:30.555481   16181 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-212517 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:25:30.555506   16181 oci.go:646] error shutdown force-systemd-flag-212517: docker exec --privileged -t force-systemd-flag-212517 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:31.557891   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:31.623086   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:31.623138   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:31.623146   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:31.623165   16181 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:32.025960   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:32.088425   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:32.088473   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:32.088482   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:32.088522   16181 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:32.852387   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:32.930686   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:32.930727   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:32.930736   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:32.930755   16181 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:34.410966   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:34.474683   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:34.474723   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:34.474735   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:34.474755   16181 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:35.682476   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:35.748713   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:35.748759   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:35.748767   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:35.748788   16181 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:37.979613   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:38.044594   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:38.044648   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:38.044657   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:38.044676   16181 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:41.153106   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:41.219657   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:41.219700   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:41.219712   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:41.219735   16181 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:46.739226   16181 cli_runner.go:164] Run: docker container inspect force-systemd-flag-212517 --format={{.State.Status}}
	W1025 21:25:46.801821   16181 cli_runner.go:211] docker container inspect force-systemd-flag-212517 --format={{.State.Status}} returned with exit code 1
	I1025 21:25:46.801868   16181 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:46.801880   16181 oci.go:660] temporary error: container force-systemd-flag-212517 status is  but expect it to be exited
	I1025 21:25:46.801906   16181 oci.go:88] couldn't shut down force-systemd-flag-212517 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	 
	I1025 21:25:46.801964   16181 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-212517
	I1025 21:25:46.865787   16181 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-212517
	W1025 21:25:46.926915   16181 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:46.927012   16181 cli_runner.go:164] Run: docker network inspect force-systemd-flag-212517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:25:46.987057   16181 cli_runner.go:164] Run: docker network rm force-systemd-flag-212517
	W1025 21:25:47.085130   16181 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:25:47.085148   16181 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:25:48.086360   16181 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:25:48.108614   16181 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:25:48.108869   16181 start.go:159] libmachine.API.Create for "force-systemd-flag-212517" (driver="docker")
	I1025 21:25:48.108917   16181 client.go:168] LocalClient.Create starting
	I1025 21:25:48.109069   16181 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:25:48.109139   16181 main.go:134] libmachine: Decoding PEM data...
	I1025 21:25:48.109162   16181 main.go:134] libmachine: Parsing certificate...
	I1025 21:25:48.109234   16181 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:25:48.109286   16181 main.go:134] libmachine: Decoding PEM data...
	I1025 21:25:48.109304   16181 main.go:134] libmachine: Parsing certificate...
	I1025 21:25:48.130881   16181 cli_runner.go:164] Run: docker network inspect force-systemd-flag-212517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:25:48.256588   16181 cli_runner.go:211] docker network inspect force-systemd-flag-212517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:25:48.256691   16181 network_create.go:272] running [docker network inspect force-systemd-flag-212517] to gather additional debugging logs...
	I1025 21:25:48.256715   16181 cli_runner.go:164] Run: docker network inspect force-systemd-flag-212517
	W1025 21:25:48.318131   16181 cli_runner.go:211] docker network inspect force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:48.318151   16181 network_create.go:275] error running [docker network inspect force-systemd-flag-212517]: docker network inspect force-systemd-flag-212517: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-212517
	I1025 21:25:48.318169   16181 network_create.go:277] output of [docker network inspect force-systemd-flag-212517]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-212517
	
	** /stderr **
	I1025 21:25:48.318245   16181 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:25:48.379195   16181 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608 192.168.67.0:0xc00099e888] misses:1}
	I1025 21:25:48.379222   16181 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:48.379425   16181 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608 192.168.67.0:0xc00099e888] misses:2}
	I1025 21:25:48.379435   16181 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:48.379619   16181 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608 192.168.67.0:0xc00099e888] amended:false}} dirty:map[] misses:0}
	I1025 21:25:48.379626   16181 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:48.379828   16181 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608 192.168.67.0:0xc00099e888] amended:true}} dirty:map[192.168.49.0:0xc0006b10a8 192.168.58.0:0xc00051c608 192.168.67.0:0xc00099e888 192.168.76.0:0xc00053ca50] misses:0}
	I1025 21:25:48.379842   16181 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:25:48.379849   16181 network_create.go:115] attempt to create docker network force-systemd-flag-212517 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:25:48.379918   16181 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-212517 force-systemd-flag-212517
	I1025 21:25:48.473815   16181 network_create.go:99] docker network force-systemd-flag-212517 192.168.76.0/24 created
	I1025 21:25:48.473854   16181 kic.go:106] calculated static IP "192.168.76.2" for the "force-systemd-flag-212517" container
	I1025 21:25:48.473951   16181 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:25:48.535784   16181 cli_runner.go:164] Run: docker volume create force-systemd-flag-212517 --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:25:48.595936   16181 oci.go:103] Successfully created a docker volume force-systemd-flag-212517
	I1025 21:25:48.596074   16181 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:25:48.729882   16181 cli_runner.go:211] docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:25:48.729924   16181 client.go:171] LocalClient.Create took 620.997494ms
	I1025 21:25:50.732332   16181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:25:50.732481   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:50.797204   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:50.797291   16181 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:50.997975   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:51.062103   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:51.062211   16181 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:51.505824   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:51.570819   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:51.570920   16181 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:51.977495   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:52.041277   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:52.041377   16181 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:52.636412   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:52.698982   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:52.699073   16181 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	W1025 21:25:52.699090   16181 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:52.699147   16181 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:25:52.699193   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:52.759341   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:52.759432   16181 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:53.029608   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:53.094801   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:53.094882   16181 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:53.608172   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:53.674424   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:53.674512   16181 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:54.122974   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:54.186430   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:54.186535   16181 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	W1025 21:25:54.186548   16181 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:54.186555   16181 start.go:128] duration metric: createHost completed in 6.10013617s
	I1025 21:25:54.186613   16181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:25:54.186664   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:54.247419   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:54.247498   16181 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:54.562856   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:54.628511   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:54.628609   16181 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:54.895938   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:54.961423   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:54.961509   16181 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:55.730308   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:55.794080   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:55.794166   16181 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	W1025 21:25:55.794186   16181 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:55.794236   16181 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:25:55.794280   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:55.854260   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:55.854366   16181 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:56.112862   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:56.177144   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:56.177225   16181 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:56.375613   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:56.439852   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	I1025 21:25:56.439944   16181 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:56.812623   16181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517
	W1025 21:25:56.877453   16181 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517 returned with exit code 1
	W1025 21:25:56.877539   16181 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	W1025 21:25:56.877552   16181 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-212517": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-212517: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	I1025 21:25:56.877559   16181 fix.go:57] fixHost completed within 26.760054987s
	I1025 21:25:56.877566   16181 start.go:83] releasing machines lock for "force-systemd-flag-212517", held for 26.760095319s
	W1025 21:25:56.877738   16181 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-212517" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-212517" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:25:56.921371   16181 out.go:177] 
	W1025 21:25:56.943305   16181 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for force-systemd-flag-212517 container: docker run --rm --name force-systemd-flag-212517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-212517 --entrypoint /usr/bin/test -v force-systemd-flag-212517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:25:56.943334   16181 out.go:239] * 
	* 
	W1025 21:25:56.944591   16181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:25:57.030333   16181 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-212517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 80
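The exit status 80 above is the end of a long retry chain: minikube repeatedly asks Docker for the host port published for the node container's 22/tcp, and every attempt fails because the container was never created. A minimal standalone sketch of that lookup, assuming only the docker CLI on PATH (hypothetical helper name, not minikube's source):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the same inspect call seen throughout the log above:
// it asks Docker which host port is published for the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		// With no such container, docker exits 1, which is what produces
		// the repeated "get port 22 for ..." errors in the log.
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("force-systemd-flag-212517")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}
```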
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-212517 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-212517 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (203.46417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-212517 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-10-25 21:25:57.270079 -0700 PDT m=+4114.887682402
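Before the post-mortem below, a note on the retry cadence: the retry.go:31 lines in the stderr above show the same pattern each time, namely run the command, log "will retry after <delay>", sleep, and try again, with delays growing from a few hundred milliseconds to several seconds. A rough sketch of that shape, assumed and simplified from the cadence visible in the log (not minikube's actual retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn, and on failure sleeps a growing, jittered delay
// before the next attempt, mimicking the "will retry after ..." lines.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	// Every attempt fails the same way the inspect calls above did.
	err := retryAfter(4, 300*time.Millisecond, func() error {
		return errors.New(`No such container: force-systemd-flag-212517`)
	})
	fmt.Println("gave up:", err)
}
```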
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-212517
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-212517:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-212517",
	        "Id": "5aa9f20b444858a5c19b63740c3bb15f7b37fa935c69c592c9464e4389f17609",
	        "Created": "2022-10-26T04:25:48.444214305Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-212517"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-212517 -n force-systemd-flag-212517
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-212517 -n force-systemd-flag-212517: exit status 7 (112.474418ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:25:57.446751   16390 status.go:249] status error: host: state: unknown state "force-systemd-flag-212517": docker container inspect force-systemd-flag-212517 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-212517

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-212517" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-212517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-212517
--- FAIL: TestForceSystemdFlag (40.48s)
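Root cause for this failure, and for the near-identical TestForceSystemdEnv failure below: the Docker daemon itself answered (network create succeeded, and the post-mortem inspect shows the leftover force-systemd-flag-212517 network with no containers), but Docker Desktop's containerd backend at /var/run/desktop-containerd/containerd.sock was refusing connections, so every docker run for the kicbase preload sidecar exited 125 before a node container existed. A pre-flight smoke test that actually creates a container would surface this in one step instead of a 40-second retry cascade; a sketch, assuming the public busybox image is available locally or pullable (not part of the test suite):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Run a trivial container end to end. `docker info` alone can pass while
// the containerd backend is down, so the check must create a container,
// which is exactly the step that failed with exit status 125 above.
func main() {
	out, err := exec.Command("docker", "run", "--rm", "busybox", "true").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker cannot create containers: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println("docker container creation OK")
}
```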

TestForceSystemdEnv (40.62s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-212625 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-212625 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 80 (39.355102017s)

-- stdout --
	* [force-systemd-env-212625] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-212625 in cluster force-systemd-env-212625
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-212625" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:26:25.326849   16492 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:26:25.327086   16492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:26:25.327091   16492 out.go:309] Setting ErrFile to fd 2...
	I1025 21:26:25.327095   16492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:26:25.327212   16492 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:26:25.327677   16492 out.go:303] Setting JSON to false
	I1025 21:26:25.342232   16492 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5154,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:26:25.342347   16492 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:26:25.370623   16492 out.go:177] * [force-systemd-env-212625] minikube v1.27.1 on Darwin 12.6
	I1025 21:26:25.390952   16492 notify.go:220] Checking for updates...
	I1025 21:26:25.411890   16492 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:26:25.433764   16492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:26:25.454966   16492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:26:25.475794   16492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:26:25.496731   16492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:26:25.523054   16492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1025 21:26:25.545661   16492 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:26:25.545800   16492 config.go:180] Loaded profile config "running-upgrade-205554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:26:25.545866   16492 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:26:25.614190   16492 docker.go:137] docker version: linux-20.10.17
	I1025 21:26:25.614336   16492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:26:25.741279   16492 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:26:25.68305896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
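
The daemon probe above (cli_runner.go:164, repeated once more during driver validation below) shells out to `docker system info --format "{{json .}}"` and decodes the resulting JSON object. A sketch of that round trip, with an illustrative struct covering only a few of the fields visible in the dump:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "system", "info",
            "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        // Illustrative subset; the real decode covers the full info struct.
        var info struct {
            NCPU          int    `json:"NCPU"`
            MemTotal      int64  `json:"MemTotal"`
            ServerVersion string `json:"ServerVersion"`
            CgroupDriver  string `json:"CgroupDriver"`
        }
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%+v\n", info) // e.g. NCPU:6 MemTotal:6232588288 ServerVersion:20.10.17
    }
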
	I1025 21:26:25.784893   16492 out.go:177] * Using the docker driver based on user configuration
	I1025 21:26:25.805745   16492 start.go:282] selected driver: docker
	I1025 21:26:25.805819   16492 start.go:808] validating driver "docker" against <nil>
	I1025 21:26:25.805841   16492 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:26:25.808804   16492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:26:25.936368   16492 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:26:25.879016791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:26:25.936501   16492 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:26:25.936634   16492 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:26:25.958341   16492 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:26:25.980355   16492 cni.go:95] Creating CNI manager for ""
	I1025 21:26:25.980384   16492 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:26:25.980400   16492 start_flags.go:317] config:
	{Name:force-systemd-env-212625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:force-systemd-env-212625 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:26:26.002271   16492 out.go:177] * Starting control plane node force-systemd-env-212625 in cluster force-systemd-env-212625
	I1025 21:26:26.044375   16492 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:26:26.066233   16492 out.go:177] * Pulling base image ...
	I1025 21:26:26.108299   16492 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:26:26.108313   16492 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:26:26.108372   16492 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:26:26.108389   16492 cache.go:57] Caching tarball of preloaded images
	I1025 21:26:26.108580   16492 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:26:26.108603   16492 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:26:26.109570   16492 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/force-systemd-env-212625/config.json ...
	I1025 21:26:26.109685   16492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/force-systemd-env-212625/config.json: {Name:mkb379498550fd28c4600f4e5608f58819624fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
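
The profile save above (profile.go:148 / lock.go:35) serializes the cluster config to config.json under the profile directory, taking a named write lock first so concurrent minikube invocations don't clobber the file. A hypothetical sketch of that shape; the in-process mutex here is a stand-in, since minikube's real lock is cross-process:

    package config

    import (
        "encoding/json"
        "os"
        "path/filepath"
        "sync"
    )

    var fileLocks sync.Map // path -> *sync.Mutex (stand-in for minikube's named locks)

    // writeConfigJSON persists cfg as <profileDir>/config.json under a per-path lock.
    func writeConfigJSON(profileDir string, cfg any) error {
        path := filepath.Join(profileDir, "config.json")
        mu, _ := fileLocks.LoadOrStore(path, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        data, err := json.MarshalIndent(cfg, "", "    ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, data, 0o644)
    }
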
	I1025 21:26:26.172935   16492 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:26:26.172961   16492 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:26:26.172998   16492 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:26:26.173041   16492 start.go:364] acquiring machines lock for force-systemd-env-212625: {Name:mk7110b3c6cc537da0112b27a209c88d600c91b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:26:26.173185   16492 start.go:368] acquired machines lock for "force-systemd-env-212625" in 132.851µs
	I1025 21:26:26.173210   16492 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-212625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:force-systemd-env-212625 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:26:26.173288   16492 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:26:26.215938   16492 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:26:26.216368   16492 start.go:159] libmachine.API.Create for "force-systemd-env-212625" (driver="docker")
	I1025 21:26:26.216408   16492 client.go:168] LocalClient.Create starting
	I1025 21:26:26.216579   16492 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:26:26.216676   16492 main.go:134] libmachine: Decoding PEM data...
	I1025 21:26:26.216707   16492 main.go:134] libmachine: Parsing certificate...
	I1025 21:26:26.216805   16492 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:26:26.216851   16492 main.go:134] libmachine: Decoding PEM data...
	I1025 21:26:26.216868   16492 main.go:134] libmachine: Parsing certificate...
	I1025 21:26:26.217683   16492 cli_runner.go:164] Run: docker network inspect force-systemd-env-212625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:26:26.279696   16492 cli_runner.go:211] docker network inspect force-systemd-env-212625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:26:26.279780   16492 network_create.go:272] running [docker network inspect force-systemd-env-212625] to gather additional debugging logs...
	I1025 21:26:26.279794   16492 cli_runner.go:164] Run: docker network inspect force-systemd-env-212625
	W1025 21:26:26.339613   16492 cli_runner.go:211] docker network inspect force-systemd-env-212625 returned with exit code 1
	I1025 21:26:26.339637   16492 network_create.go:275] error running [docker network inspect force-systemd-env-212625]: docker network inspect force-systemd-env-212625: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-212625
	I1025 21:26:26.339651   16492 network_create.go:277] output of [docker network inspect force-systemd-env-212625]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-212625
	
	** /stderr **
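
The long --format argument above is a Go template that renders the network as ad-hoc JSON (name, driver, subnet, gateway, MTU, attached container IPs). Exit status 1 with "No such network" is the expected miss on a fresh profile, and the code falls through to creating the network. A sketch of that probe-then-create decision, with the template shortened for readability:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func ensureNetwork(name string) error {
        out, err := exec.Command("docker", "network", "inspect", name,
            "--format", `{"Name":"{{.Name}}","Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`,
        ).CombinedOutput()
        if err == nil {
            fmt.Println("exists:", strings.TrimSpace(string(out)))
            return nil
        }
        if strings.Contains(string(out), "No such network") {
            // Expected miss: create the network (subnet selection elided here).
            return exec.Command("docker", "network", "create", name).Run()
        }
        return fmt.Errorf("inspect %s: %v: %s", name, err, out)
    }

    func main() {
        if err := ensureNetwork("force-systemd-env-212625"); err != nil {
            fmt.Println(err)
        }
    }
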
	I1025 21:26:26.339767   16492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:26:26.401485   16492 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00055ab18] misses:0}
	I1025 21:26:26.401521   16492 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:26.401534   16492 network_create.go:115] attempt to create docker network force-systemd-env-212625 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:26:26.401619   16492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-212625 force-systemd-env-212625
	W1025 21:26:26.462113   16492 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-212625 force-systemd-env-212625 returned with exit code 1
	W1025 21:26:26.462145   16492 network_create.go:107] failed to create docker network force-systemd-env-212625 192.168.49.0/24, will retry: subnet is taken
	I1025 21:26:26.462428   16492 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18] amended:false}} dirty:map[] misses:0}
	I1025 21:26:26.462445   16492 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:26.462680   16492 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18] amended:true}} dirty:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630] misses:0}
	I1025 21:26:26.462693   16492 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:26.462701   16492 network_create.go:115] attempt to create docker network force-systemd-env-212625 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:26:26.462760   16492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-212625 force-systemd-env-212625
	W1025 21:26:26.523482   16492 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-212625 force-systemd-env-212625 returned with exit code 1
	W1025 21:26:26.523514   16492 network_create.go:107] failed to create docker network force-systemd-env-212625 192.168.58.0/24, will retry: subnet is taken
	I1025 21:26:26.523809   16492 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18] amended:true}} dirty:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630] misses:1}
	I1025 21:26:26.523826   16492 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:26.524028   16492 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18] amended:true}} dirty:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630 192.168.67.0:0xc00055ab80] misses:1}
	I1025 21:26:26.524038   16492 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:26.524046   16492 network_create.go:115] attempt to create docker network force-systemd-env-212625 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:26:26.524127   16492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-212625 force-systemd-env-212625
	I1025 21:26:26.616424   16492 network_create.go:99] docker network force-systemd-env-212625 192.168.67.0/24 created
	I1025 21:26:26.616456   16492 kic.go:106] calculated static IP "192.168.67.2" for the "force-systemd-env-212625" container
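
The three create attempts above show the free-subnet walk: start at 192.168.49.0/24 and step the third octet by 9 (58, 67, ...) whenever the daemon reports the subnet as taken, then derive the node's static IP as the first client address (.2) in the winning subnet. A simplified sketch of that loop; matching the failure by substring is an approximation of the retry decision logged above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func createMinikubeNetwork(name string) (string, string, error) {
        for octet := 49; octet <= 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                name).CombinedOutput()
            if err == nil {
                // Static node IP: gateway is .1, the node container gets .2.
                return subnet, fmt.Sprintf("192.168.%d.2", octet), nil
            }
            if strings.Contains(string(out), "Pool overlaps") ||
                strings.Contains(string(out), "is already used") {
                continue // subnet is taken: step to the next candidate
            }
            return "", "", fmt.Errorf("network create: %v: %s", err, out)
        }
        return "", "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
    }

    func main() {
        fmt.Println(createMinikubeNetwork("force-systemd-env-212625"))
    }
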
	I1025 21:26:26.616558   16492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:26:26.677874   16492 cli_runner.go:164] Run: docker volume create force-systemd-env-212625 --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:26:26.738987   16492 oci.go:103] Successfully created a docker volume force-systemd-env-212625
	I1025 21:26:26.739095   16492 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:26:26.948570   16492 cli_runner.go:211] docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:26:26.948626   16492 client.go:171] LocalClient.Create took 732.207942ms
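
The step that actually fails is the volume-preparation "sidecar": a throwaway run of the kicbase image with its entrypoint replaced by `/usr/bin/test -d /var/lib`, which both forces Docker to populate the named volume from the image's /var and verifies the mount. Exit code 125 means the daemon refused the run before any container started; the underlying reason surfaces in the stderr further down. Roughly equivalent Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    const kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094" // digest elided

    func prepareVolume(name string) error {
        // docker run --rm --entrypoint /usr/bin/test -v <vol>:/var <image> -d /var/lib
        cmd := exec.Command("docker", "run", "--rm",
            "--name", name+"-preload-sidecar",
            "--entrypoint", "/usr/bin/test",
            "-v", name+":/var",
            kicbase, "-d", "/var/lib")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("sidecar failed (exit 125 = daemon-level error): %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := prepareVolume("force-systemd-env-212625"); err != nil {
            fmt.Println(err)
        }
    }
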
	I1025 21:26:28.951073   16492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:26:28.951194   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:29.014455   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:29.014546   16492 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:29.292578   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:29.358807   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:29.358888   16492 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:29.899941   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:29.963994   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:29.964087   16492 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:30.621483   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:30.684920   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	W1025 21:26:30.685004   16492 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	W1025 21:26:30.685030   16492 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:30.685081   16492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:26:30.685126   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:30.744480   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:30.744563   16492 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:30.978071   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:31.043789   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:31.043870   16492 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:31.490175   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:31.554896   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:31.554982   16492 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:31.875525   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:31.942793   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:31.943500   16492 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:32.499808   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:32.567803   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	W1025 21:26:32.567914   16492 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	W1025 21:26:32.567938   16492 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:32.567948   16492 start.go:128] duration metric: createHost completed in 6.39463782s
	I1025 21:26:32.567957   16492 start.go:83] releasing machines lock for "force-systemd-env-212625", held for 6.394749253s
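
Each "will retry after ..." line above (retry.go:31) re-runs the SSH port lookup after a short randomized delay; the intervals in the log (276ms, 540ms, 655ms, then 231ms, 445ms, 318ms, 553ms) look jittered rather than fixed, and once the attempts are exhausted the error is promoted to the W-level "error running df ..." messages. A generic sketch of that pattern; the helper name and bounds here are illustrative:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithJitter(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            delay := 200*time.Millisecond +
                time.Duration(rand.Intn(500))*time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err // promoted to a warning by the caller
    }

    func main() {
        i := 0
        err := retryWithJitter(5, func() error {
            i++
            return fmt.Errorf("No such container (attempt %d)", i)
        })
        fmt.Println("gave up:", err)
    }
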
	W1025 21:26:32.567974   16492 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for force-systemd-env-212625 container: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
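
This stderr is the root cause of the whole run: dockerd inside the Docker Desktop VM cannot reach its containerd over /var/run/desktop-containerd/containerd.sock, so every container create fails with exit 125 while read-only CLI queries keep succeeding. A hypothetical probe for that condition, runnable only from inside the VM since the socket path is VM-internal, not a macOS host path:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix",
            "/var/run/desktop-containerd/containerd.sock", 2*time.Second)
        if err != nil {
            // "connect: connection refused" reproduces the daemon error above.
            fmt.Println("containerd unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("containerd socket is accepting connections")
    }
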
	I1025 21:26:32.568382   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:32.628837   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:32.628894   16492 delete.go:82] Unable to get host status for force-systemd-env-212625, assuming it has already been deleted: state: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	W1025 21:26:32.629077   16492 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for force-systemd-env-212625 container: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for force-systemd-env-212625 container: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:26:32.629094   16492 start.go:618] Will try again in 5 seconds ...
	I1025 21:26:37.629481   16492 start.go:364] acquiring machines lock for force-systemd-env-212625: {Name:mk7110b3c6cc537da0112b27a209c88d600c91b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:26:37.629629   16492 start.go:368] acquired machines lock for "force-systemd-env-212625" in 110.703µs
	I1025 21:26:37.629659   16492 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:26:37.629673   16492 fix.go:55] fixHost starting: 
	I1025 21:26:37.630038   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:37.694342   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:37.694401   16492 fix.go:103] recreateIfNeeded on force-systemd-env-212625: state= err=unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:37.694423   16492 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:26:37.716438   16492 out.go:177] * docker "force-systemd-env-212625" container is missing, will recreate.
	I1025 21:26:37.758929   16492 delete.go:124] DEMOLISHING force-systemd-env-212625 ...
	I1025 21:26:37.759116   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:37.819887   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	W1025 21:26:37.819929   16492 stop.go:75] unable to get state: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:37.819949   16492 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:37.820330   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:37.880957   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:37.881007   16492 delete.go:82] Unable to get host status for force-systemd-env-212625, assuming it has already been deleted: state: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:37.881099   16492 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-212625
	W1025 21:26:37.941314   16492 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-212625 returned with exit code 1
	I1025 21:26:37.941355   16492 kic.go:356] could not find the container force-systemd-env-212625 to remove it. will try anyways
	I1025 21:26:37.941449   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:38.002326   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	W1025 21:26:38.002366   16492 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:38.002438   16492 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-212625 /bin/bash -c "sudo init 0"
	W1025 21:26:38.061684   16492 cli_runner.go:211] docker exec --privileged -t force-systemd-env-212625 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:26:38.061708   16492 oci.go:646] error shutdown force-systemd-env-212625: docker exec --privileged -t force-systemd-env-212625 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:39.064077   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:39.128179   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:39.128224   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:39.128234   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:39.128253   16492 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:39.531114   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:39.593619   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:39.593665   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:39.593677   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:39.593705   16492 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:40.356971   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:40.422118   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:40.422169   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:40.422180   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:40.422202   16492 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:41.902374   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:41.964333   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:41.964388   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:41.964397   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:41.964418   16492 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:43.171739   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:43.237714   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:43.237759   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:43.237770   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:43.237793   16492 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:45.469339   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:45.533975   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:45.534033   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:45.534050   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:45.534072   16492 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:48.641410   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:48.704180   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:48.704230   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:48.704240   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:48.704279   16492 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:54.222748   16492 cli_runner.go:164] Run: docker container inspect force-systemd-env-212625 --format={{.State.Status}}
	W1025 21:26:54.284763   16492 cli_runner.go:211] docker container inspect force-systemd-env-212625 --format={{.State.Status}} returned with exit code 1
	I1025 21:26:54.284830   16492 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:54.284840   16492 oci.go:660] temporary error: container force-systemd-env-212625 status is  but expect it to be exited
	I1025 21:26:54.284888   16492 oci.go:88] couldn't shut down force-systemd-env-212625 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	 
	I1025 21:26:54.284952   16492 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-212625
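
The demolition path above is deliberately tolerant: `sudo init 0` inside a container that never existed fails, and the verify-shutdown loop then polls the container state with growing waits (400ms, 761ms, 1.5s, 1.2s, 2.2s, 3.1s, 5.5s) before concluding "might be okay" and falling through to `docker rm -f -v`. A sketch of that escalating poll; the growth factor is an assumption, since only the roughly doubling, jittered shape is visible in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForExited(name string, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 400 * time.Millisecond
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil && string(out) == "exited\n" {
                return nil
            }
            fmt.Printf("will retry after %v\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // assumed growth; the log shows jittered doubling-ish waits
        }
        return fmt.Errorf("couldn't verify container %s is exited (might be okay)", name)
    }

    func main() {
        fmt.Println(waitForExited("force-systemd-env-212625", 20*time.Second))
    }
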
	I1025 21:26:54.348406   16492 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-212625
	W1025 21:26:54.408899   16492 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-212625 returned with exit code 1
	I1025 21:26:54.409010   16492 cli_runner.go:164] Run: docker network inspect force-systemd-env-212625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:26:54.470102   16492 cli_runner.go:164] Run: docker network rm force-systemd-env-212625
	W1025 21:26:54.583442   16492 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:26:54.583460   16492 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:26:55.585621   16492 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:26:55.607744   16492 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:26:55.607912   16492 start.go:159] libmachine.API.Create for "force-systemd-env-212625" (driver="docker")
	I1025 21:26:55.607940   16492 client.go:168] LocalClient.Create starting
	I1025 21:26:55.608098   16492 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:26:55.608203   16492 main.go:134] libmachine: Decoding PEM data...
	I1025 21:26:55.608233   16492 main.go:134] libmachine: Parsing certificate...
	I1025 21:26:55.608307   16492 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:26:55.608350   16492 main.go:134] libmachine: Decoding PEM data...
	I1025 21:26:55.608365   16492 main.go:134] libmachine: Parsing certificate...
	I1025 21:26:55.608971   16492 cli_runner.go:164] Run: docker network inspect force-systemd-env-212625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:26:55.674151   16492 cli_runner.go:211] docker network inspect force-systemd-env-212625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:26:55.674236   16492 network_create.go:272] running [docker network inspect force-systemd-env-212625] to gather additional debugging logs...
	I1025 21:26:55.674258   16492 cli_runner.go:164] Run: docker network inspect force-systemd-env-212625
	W1025 21:26:55.734751   16492 cli_runner.go:211] docker network inspect force-systemd-env-212625 returned with exit code 1
	I1025 21:26:55.734775   16492 network_create.go:275] error running [docker network inspect force-systemd-env-212625]: docker network inspect force-systemd-env-212625: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-212625
	I1025 21:26:55.734789   16492 network_create.go:277] output of [docker network inspect force-systemd-env-212625]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-212625
	
	** /stderr **
	I1025 21:26:55.734885   16492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:26:55.796115   16492 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18] amended:true}} dirty:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630 192.168.67.0:0xc00055ab80] misses:1}
	I1025 21:26:55.796143   16492 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:55.796348   16492 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18] amended:true}} dirty:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630 192.168.67.0:0xc00055ab80] misses:2}
	I1025 21:26:55.796357   16492 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:55.796548   16492 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630 192.168.67.0:0xc00055ab80] amended:false}} dirty:map[] misses:0}
	I1025 21:26:55.796556   16492 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:26:55.796760   16492 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630 192.168.67.0:0xc00055ab80] amended:true}} dirty:map[192.168.49.0:0xc00055ab18 192.168.58.0:0xc000d00630 192.168.67.0:0xc00055ab80 192.168.76.0:0xc00055a3d8] misses:0}
	I1025 21:26:55.796782   16492 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
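
The "skipping subnet ... unexpired reservation" lines explain why this second attempt lands on 192.168.76.0/24: reservations live in an in-process map with a one-minute TTL, so the 49/58/67 entries parked during the first attempt are still active. A hypothetical sketch of that bookkeeping; the real structure is the sync.Map dumped verbatim in the log lines above:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var reservations sync.Map // subnet string -> expiry time.Time

    // tryReserve parks a subnet for ttl; it fails if an unexpired
    // reservation already exists, which is the "skipping subnet" case above.
    func tryReserve(subnet string, ttl time.Duration) bool {
        now := time.Now()
        if v, ok := reservations.Load(subnet); ok && now.Before(v.(time.Time)) {
            return false
        }
        reservations.Store(subnet, now.Add(ttl))
        return true
    }

    func main() {
        fmt.Println(tryReserve("192.168.49.0", time.Minute)) // true: reserved
        fmt.Println(tryReserve("192.168.49.0", time.Minute)) // false: unexpired reservation
    }
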
	I1025 21:26:55.796792   16492 network_create.go:115] attempt to create docker network force-systemd-env-212625 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:26:55.796858   16492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-212625 force-systemd-env-212625
	I1025 21:26:55.887654   16492 network_create.go:99] docker network force-systemd-env-212625 192.168.76.0/24 created
	I1025 21:26:55.887684   16492 kic.go:106] calculated static IP "192.168.76.2" for the "force-systemd-env-212625" container
	I1025 21:26:55.887800   16492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:26:55.949692   16492 cli_runner.go:164] Run: docker volume create force-systemd-env-212625 --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:26:56.010209   16492 oci.go:103] Successfully created a docker volume force-systemd-env-212625
	I1025 21:26:56.010321   16492 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:26:56.144213   16492 cli_runner.go:211] docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:26:56.144257   16492 client.go:171] LocalClient.Create took 536.309523ms
	I1025 21:26:58.146698   16492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:26:58.146793   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:58.263311   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:58.263398   16492 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:58.463806   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:58.526374   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:58.526481   16492 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:58.969423   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:59.031414   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:59.031516   16492 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:26:59.437881   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:26:59.501104   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:26:59.501211   16492 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:00.096151   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:00.163073   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	W1025 21:27:00.163182   16492 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	W1025 21:27:00.163197   16492 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:00.163248   16492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:27:00.163291   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:00.223378   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:00.223460   16492 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:00.493509   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:00.558401   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:00.558495   16492 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:01.071553   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:01.134216   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:01.134311   16492 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:01.580951   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:01.641686   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	W1025 21:27:01.641792   16492 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	W1025 21:27:01.641809   16492 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:01.641817   16492 start.go:128] duration metric: createHost completed in 6.056136416s
	I1025 21:27:01.641878   16492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:27:01.641919   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:01.701915   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:01.702018   16492 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:02.017138   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:02.080300   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:02.080400   16492 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:02.345616   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:02.409410   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:02.409518   16492 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:03.178372   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:03.373805   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	W1025 21:27:03.373981   16492 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	W1025 21:27:03.374010   16492 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:03.374119   16492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:27:03.374220   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:03.437910   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:03.438081   16492 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:03.696206   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:03.756992   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:03.757082   16492 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:03.957551   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:04.025156   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	I1025 21:27:04.025244   16492 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:04.397685   16492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625
	W1025 21:27:04.458604   16492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625 returned with exit code 1
	W1025 21:27:04.458713   16492 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	W1025 21:27:04.458731   16492 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-212625": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-212625: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	I1025 21:27:04.458747   16492 fix.go:57] fixHost completed within 26.829010508s
	I1025 21:27:04.458754   16492 start.go:83] releasing machines lock for "force-systemd-env-212625", held for 26.829049853s
	W1025 21:27:04.458923   16492 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-212625" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for force-systemd-env-212625 container: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:27:04.500187   16492 out.go:177] 
	W1025 21:27:04.521518   16492 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for force-systemd-env-212625 container: docker run --rm --name force-systemd-env-212625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-212625 --entrypoint /usr/bin/test -v force-systemd-env-212625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:27:04.521560   16492 out.go:239] * 
	W1025 21:27:04.522840   16492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:27:04.608254   16492 out.go:177] 

** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-212625 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 80
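Note: the stderr above contains the actual root cause. Every docker invocation failed because Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) was refusing connections, so the kic preload sidecar could not run and the node container was never created. A minimal, hypothetical pre-flight probe for this condition, written against the standard Docker Go client (github.com/docker/docker/client); this is not minikube's code, only an illustration of how the dead daemon could be detected before provisioning starts:

// Hypothetical pre-flight probe using the standard Docker Go client.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		fmt.Println("cannot construct docker client:", err)
		return
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ping hits the daemon's /_ping endpoint; with Docker Desktop's backend
	// down it fails fast, matching the "connection refused" in the log above.
	if _, err := cli.Ping(ctx); err != nil {
		fmt.Println("docker daemon unreachable:", err)
		return
	}
	fmt.Println("docker daemon is healthy")
}

With the daemon in this state such a check fails in milliseconds instead of consuming the roughly 40-second retry budget visible above.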
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-212625 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-212625 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (245.336308ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-212625 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
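Note: this assertion is the point of TestForceSystemdEnv: with MINIKUBE_FORCE_SYSTEMD=true, the Docker daemon inside the node is expected to report the systemd cgroup driver. The check never ran because the node container does not exist. A minimal sketch of the equivalent query through the Docker Go client (an illustration only; the test itself shells out to `docker info --format {{.CgroupDriver}}` over SSH):

// Illustration: read the cgroup driver via the Docker Go client.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		panic(err)
	}
	// TestForceSystemdEnv expects "systemd" here when
	// MINIKUBE_FORCE_SYSTEMD=true is honored.
	fmt.Println("CgroupDriver:", info.CgroupDriver)
}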
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2022-10-25 21:27:04.890852 -0700 PDT m=+4182.508293586
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-212625
helpers_test.go:235: (dbg) docker inspect force-systemd-env-212625:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-212625",
	        "Id": "0207270b547c897681c8f5ad57e23b59a9245b591f932589aab05d1cd1c75384",
	        "Created": "2022-10-26T04:26:55.866515698Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-212625"
	        }
	    }
	]

-- /stdout --
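Note: the object dumped above is the Docker *network* named force-systemd-env-212625 (Scope/Driver/IPAM fields), not a container. `docker inspect` matches any object by name, and after the failed start only the network and the volume with that name survive, which is why every container inspect in the log fails. A hypothetical disambiguation using the Docker Go client:

// Hypothetical follow-up to the post-mortem: the name resolves only as a
// network; the container itself was never created.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()
	name := "force-systemd-env-212625"

	if _, err := cli.ContainerInspect(ctx, name); err != nil {
		fmt.Println("container:", err) // "No such container", as in the log
	}

	nw, err := cli.NetworkInspect(ctx, name, types.NetworkInspectOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("network %q exists with subnet %s\n", nw.Name, nw.IPAM.Config[0].Subnet)
}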
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-212625 -n force-systemd-env-212625
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-212625 -n force-systemd-env-212625: exit status 7 (116.379729ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:27:05.070811   16792 status.go:249] status error: host: state: unknown state "force-systemd-env-212625": docker container inspect force-systemd-env-212625 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-212625

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-212625" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-212625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-212625
--- FAIL: TestForceSystemdEnv (40.62s)
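Note: most of the 40.62s above is minikube's retry.go polling port 22 of a container that was never created, with randomized delays ("will retry after 198.275464ms", and so on). A minimal, hypothetical sketch of that bounded retry-with-jitter pattern (not minikube's actual implementation):

// Hypothetical bounded retry with randomized backoff, mirroring the
// "will retry after ..." lines in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered delay between
// failures, and returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(5, 200*time.Millisecond, func() error {
		// Stand-in for the "docker container inspect ... HostPort" probe;
		// the real call keeps failing because the container never existed.
		return errors.New("No such container: force-systemd-env-212625")
	})
	fmt.Println("gave up:", err)
}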

TestIngressAddonLegacy/StartLegacyK8sCluster (255.12s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-202633 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1025 20:26:34.373814    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:27:56.296961    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:30:12.452091    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:30:12.988296    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:12.994839    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:13.007073    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:13.027512    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:13.067911    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:13.148363    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:13.310715    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:13.633041    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:14.273199    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:15.554224    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:18.114595    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:23.236960    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:33.479299    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:30:40.139091    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-202633 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m15.092580048s)

-- stdout --
	* [ingress-addon-legacy-202633] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-202633 in cluster ingress-addon-legacy-202633
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.18 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
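Note: the doubled "Generating certificates and keys ..." / "Booting up control plane ..." lines in the stdout above suggest kubeadm init was attempted twice before the start gave up with exit status 109. The stderr below also shows the v1.18.20 preload tarball being downloaded with an md5 checksum parameter and then verified on disk; a minimal, hypothetical sketch of that verification step, using the file name and checksum that appear in the log:

// Hypothetical md5 verification of a downloaded preload tarball; the path
// and expected sum are taken from the log below.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	const expected = "ff35f06d4f6c0bac9297b8f85d8ebf70"
	path := "preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4"

	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	sum := hex.EncodeToString(h.Sum(nil))
	if sum != expected {
		fmt.Printf("checksum mismatch: got %s, want %s\n", sum, expected)
		return
	}
	fmt.Println("preload tarball checksum verified")
}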
** stderr ** 
	I1025 20:26:34.044531    5426 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:26:34.044719    5426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:26:34.044724    5426 out.go:309] Setting ErrFile to fd 2...
	I1025 20:26:34.044729    5426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:26:34.044839    5426 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:26:34.045316    5426 out.go:303] Setting JSON to false
	I1025 20:26:34.059959    5426 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1563,"bootTime":1666753231,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:26:34.060053    5426 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:26:34.081812    5426 out.go:177] * [ingress-addon-legacy-202633] minikube v1.27.1 on Darwin 12.6
	I1025 20:26:34.130358    5426 notify.go:220] Checking for updates...
	I1025 20:26:34.151642    5426 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:26:34.173697    5426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:26:34.194723    5426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:26:34.215990    5426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:26:34.238580    5426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:26:34.260074    5426 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:26:34.328033    5426 docker.go:137] docker version: linux-20.10.17
	I1025 20:26:34.328194    5426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:26:34.459477    5426 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2022-10-26 03:26:34.40411197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:26:34.481436    5426 out.go:177] * Using the docker driver based on user configuration
	I1025 20:26:34.503197    5426 start.go:282] selected driver: docker
	I1025 20:26:34.503232    5426 start.go:808] validating driver "docker" against <nil>
	I1025 20:26:34.503254    5426 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:26:34.506706    5426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:26:34.635936    5426 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2022-10-26 03:26:34.581172407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:26:34.636048    5426 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 20:26:34.636186    5426 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 20:26:34.659744    5426 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 20:26:34.681347    5426 cni.go:95] Creating CNI manager for ""
	I1025 20:26:34.681391    5426 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 20:26:34.681408    5426 start_flags.go:317] config:
	{Name:ingress-addon-legacy-202633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:26:34.703381    5426 out.go:177] * Starting control plane node ingress-addon-legacy-202633 in cluster ingress-addon-legacy-202633
	I1025 20:26:34.746520    5426 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:26:34.768220    5426 out.go:177] * Pulling base image ...
	I1025 20:26:34.810296    5426 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 20:26:34.810313    5426 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:26:34.863925    5426 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1025 20:26:34.863945    5426 cache.go:57] Caching tarball of preloaded images
	I1025 20:26:34.864151    5426 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 20:26:34.885746    5426 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1025 20:26:34.907738    5426 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:26:34.915488    5426 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:26:34.915507    5426 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:26:34.992976    5426 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1025 20:26:40.387006    5426 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:26:40.387200    5426 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:26:41.008901    5426 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1025 20:26:41.009166    5426 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/config.json ...
	I1025 20:26:41.009186    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/config.json: {Name:mk3177aa7211209cfdff6f78941c07905be3fc45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:41.009528    5426 cache.go:208] Successfully downloaded all kic artifacts
	I1025 20:26:41.009554    5426 start.go:364] acquiring machines lock for ingress-addon-legacy-202633: {Name:mk5a1e41598c65e6740a7f4a01c1399296528ce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:26:41.009685    5426 start.go:368] acquired machines lock for "ingress-addon-legacy-202633" in 123.166µs
	I1025 20:26:41.009705    5426 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-202633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 20:26:41.009809    5426 start.go:125] createHost starting for "" (driver="docker")
	I1025 20:26:41.052909    5426 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 20:26:41.053306    5426 start.go:159] libmachine.API.Create for "ingress-addon-legacy-202633" (driver="docker")
	I1025 20:26:41.053346    5426 client.go:168] LocalClient.Create starting
	I1025 20:26:41.053485    5426 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 20:26:41.053551    5426 main.go:134] libmachine: Decoding PEM data...
	I1025 20:26:41.053580    5426 main.go:134] libmachine: Parsing certificate...
	I1025 20:26:41.053680    5426 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 20:26:41.053731    5426 main.go:134] libmachine: Decoding PEM data...
	I1025 20:26:41.053750    5426 main.go:134] libmachine: Parsing certificate...
	I1025 20:26:41.054572    5426 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-202633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 20:26:41.118895    5426 cli_runner.go:211] docker network inspect ingress-addon-legacy-202633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 20:26:41.119056    5426 network_create.go:272] running [docker network inspect ingress-addon-legacy-202633] to gather additional debugging logs...
	I1025 20:26:41.119086    5426 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-202633
	W1025 20:26:41.181365    5426 cli_runner.go:211] docker network inspect ingress-addon-legacy-202633 returned with exit code 1
	I1025 20:26:41.181398    5426 network_create.go:275] error running [docker network inspect ingress-addon-legacy-202633]: docker network inspect ingress-addon-legacy-202633: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-202633
	I1025 20:26:41.181428    5426 network_create.go:277] output of [docker network inspect ingress-addon-legacy-202633]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-202633
	
	** /stderr **
	I1025 20:26:41.181524    5426 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:26:41.243644    5426 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c7aca0] misses:0}
	I1025 20:26:41.243686    5426 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:26:41.243701    5426 network_create.go:115] attempt to create docker network ingress-addon-legacy-202633 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 20:26:41.243773    5426 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-202633 ingress-addon-legacy-202633
	I1025 20:26:41.334188    5426 network_create.go:99] docker network ingress-addon-legacy-202633 192.168.49.0/24 created
	I1025 20:26:41.334251    5426 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-202633" container
	I1025 20:26:41.334343    5426 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 20:26:41.395227    5426 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-202633 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-202633 --label created_by.minikube.sigs.k8s.io=true
	I1025 20:26:41.460427    5426 oci.go:103] Successfully created a docker volume ingress-addon-legacy-202633
	I1025 20:26:41.460633    5426 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-202633-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-202633 --entrypoint /usr/bin/test -v ingress-addon-legacy-202633:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	I1025 20:26:41.909676    5426 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-202633
	I1025 20:26:41.909841    5426 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 20:26:41.909856    5426 kic.go:179] Starting extracting preloaded images to volume ...
	I1025 20:26:41.909984    5426 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-202633:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 20:26:46.269783    5426 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-202633:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir: (4.359691108s)
	I1025 20:26:46.269881    5426 kic.go:188] duration metric: took 4.359979 seconds to extract preloaded images to volume
	I1025 20:26:46.269981    5426 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 20:26:46.399262    5426 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-202633 --name ingress-addon-legacy-202633 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-202633 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-202633 --network ingress-addon-legacy-202633 --ip 192.168.49.2 --volume ingress-addon-legacy-202633:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191
	I1025 20:26:46.754862    5426 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-202633 --format={{.State.Running}}
	I1025 20:26:46.821649    5426 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-202633 --format={{.State.Status}}
	I1025 20:26:46.891724    5426 cli_runner.go:164] Run: docker exec ingress-addon-legacy-202633 stat /var/lib/dpkg/alternatives/iptables
	I1025 20:26:47.003818    5426 oci.go:144] the created container "ingress-addon-legacy-202633" has a running status.
	I1025 20:26:47.003867    5426 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa...
	I1025 20:26:47.123290    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 20:26:47.123343    5426 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 20:26:47.233578    5426 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-202633 --format={{.State.Status}}
	I1025 20:26:47.296777    5426 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 20:26:47.296795    5426 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-202633 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 20:26:47.410086    5426 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-202633 --format={{.State.Status}}
	I1025 20:26:47.472013    5426 machine.go:88] provisioning docker machine ...
	I1025 20:26:47.472067    5426 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-202633"
	I1025 20:26:47.472175    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:47.533731    5426 main.go:134] libmachine: Using SSH client type: native
	I1025 20:26:47.533914    5426 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 50430 <nil> <nil>}
	I1025 20:26:47.533933    5426 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-202633 && echo "ingress-addon-legacy-202633" | sudo tee /etc/hostname
	I1025 20:26:47.662602    5426 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-202633
	
	I1025 20:26:47.662679    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:47.725217    5426 main.go:134] libmachine: Using SSH client type: native
	I1025 20:26:47.725538    5426 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 50430 <nil> <nil>}
	I1025 20:26:47.725554    5426 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-202633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-202633/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-202633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 20:26:47.846769    5426 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:26:47.846804    5426 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/14956-2080/.minikube CaCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/14956-2080/.minikube}
	I1025 20:26:47.846821    5426 ubuntu.go:177] setting up certificates
	I1025 20:26:47.846831    5426 provision.go:83] configureAuth start
	I1025 20:26:47.846907    5426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-202633
	I1025 20:26:47.910589    5426 provision.go:138] copyHostCerts
	I1025 20:26:47.910653    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:26:47.910703    5426 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem, removing ...
	I1025 20:26:47.910709    5426 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:26:47.910815    5426 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem (1078 bytes)
	I1025 20:26:47.910998    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:26:47.911034    5426 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem, removing ...
	I1025 20:26:47.911038    5426 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:26:47.911095    5426 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem (1123 bytes)
	I1025 20:26:47.911208    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:26:47.911239    5426 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem, removing ...
	I1025 20:26:47.911243    5426 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:26:47.911295    5426 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem (1679 bytes)
	I1025 20:26:47.911408    5426 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-202633 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-202633]
	I1025 20:26:47.994257    5426 provision.go:172] copyRemoteCerts
	I1025 20:26:47.994305    5426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 20:26:47.994350    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:48.057611    5426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:26:48.146049    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 20:26:48.146135    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1025 20:26:48.162626    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 20:26:48.162694    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1025 20:26:48.178914    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 20:26:48.178986    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 20:26:48.195826    5426 provision.go:86] duration metric: configureAuth took 348.978495ms
	I1025 20:26:48.195838    5426 ubuntu.go:193] setting minikube options for container-runtime
	I1025 20:26:48.195988    5426 config.go:180] Loaded profile config "ingress-addon-legacy-202633": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 20:26:48.196037    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:48.258573    5426 main.go:134] libmachine: Using SSH client type: native
	I1025 20:26:48.259096    5426 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 50430 <nil> <nil>}
	I1025 20:26:48.259119    5426 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 20:26:48.385601    5426 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 20:26:48.385617    5426 ubuntu.go:71] root file system type: overlay
	I1025 20:26:48.385769    5426 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 20:26:48.385842    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:48.447760    5426 main.go:134] libmachine: Using SSH client type: native
	I1025 20:26:48.447919    5426 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 50430 <nil> <nil>}
	I1025 20:26:48.447968    5426 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 20:26:48.585720    5426 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 20:26:48.585793    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:48.648852    5426 main.go:134] libmachine: Using SSH client type: native
	I1025 20:26:48.649030    5426 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 50430 <nil> <nil>}
	I1025 20:26:48.649048    5426 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 20:26:49.224575    5426 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-09-08 23:09:37.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-26 03:26:48.590361354 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 20:26:49.224600    5426 machine.go:91] provisioned docker machine in 1.752550643s
	I1025 20:26:49.224606    5426 client.go:171] LocalClient.Create took 8.171168866s
	I1025 20:26:49.224648    5426 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-202633" took 8.171256933s
	I1025 20:26:49.224664    5426 start.go:300] post-start starting for "ingress-addon-legacy-202633" (driver="docker")
	I1025 20:26:49.224669    5426 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 20:26:49.224725    5426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 20:26:49.224772    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:49.287841    5426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:26:49.377112    5426 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 20:26:49.380748    5426 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 20:26:49.380766    5426 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 20:26:49.380775    5426 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 20:26:49.380782    5426 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 20:26:49.380790    5426 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/addons for local assets ...
	I1025 20:26:49.380886    5426 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/files for local assets ...
	I1025 20:26:49.381022    5426 filesync.go:149] local asset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> 29162.pem in /etc/ssl/certs
	I1025 20:26:49.381030    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /etc/ssl/certs/29162.pem
	I1025 20:26:49.381170    5426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 20:26:49.388478    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:26:49.405173    5426 start.go:303] post-start completed in 180.499063ms
	I1025 20:26:49.405673    5426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-202633
	I1025 20:26:49.467736    5426 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/config.json ...
	I1025 20:26:49.468112    5426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:26:49.468161    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:49.532249    5426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:26:49.617497    5426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:26:49.621734    5426 start.go:128] duration metric: createHost completed in 8.61182813s
	I1025 20:26:49.621748    5426 start.go:83] releasing machines lock for "ingress-addon-legacy-202633", held for 8.611966044s
	I1025 20:26:49.621811    5426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-202633
	I1025 20:26:49.685416    5426 ssh_runner.go:195] Run: systemctl --version
	I1025 20:26:49.685430    5426 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1025 20:26:49.685484    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:49.685496    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:49.752555    5426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:26:49.752663    5426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:26:50.027894    5426 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 20:26:50.038213    5426 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 20:26:50.038267    5426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 20:26:50.046757    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 20:26:50.059166    5426 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 20:26:50.127015    5426 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 20:26:50.193955    5426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:26:50.259058    5426 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 20:26:50.452981    5426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:26:50.480287    5426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:26:50.551697    5426 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.18 ...
	I1025 20:26:50.551880    5426 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-202633 dig +short host.docker.internal
	I1025 20:26:50.671229    5426 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 20:26:50.671417    5426 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 20:26:50.675434    5426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 20:26:50.684616    5426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:26:50.750028    5426 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 20:26:50.750100    5426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 20:26:50.772143    5426 docker.go:612] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1025 20:26:50.772164    5426 docker.go:543] Images already preloaded, skipping extraction
	I1025 20:26:50.772232    5426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 20:26:50.792994    5426 docker.go:612] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1025 20:26:50.793017    5426 cache_images.go:84] Images are preloaded, skipping loading
	I1025 20:26:50.793084    5426 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 20:26:50.855754    5426 cni.go:95] Creating CNI manager for ""
	I1025 20:26:50.855767    5426 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 20:26:50.855779    5426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 20:26:50.855794    5426 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-202633 NodeName:ingress-addon-legacy-202633 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 20:26:50.855900    5426 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-202633"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 20:26:50.855984    5426 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-202633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 20:26:50.856041    5426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1025 20:26:50.863558    5426 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 20:26:50.863602    5426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 20:26:50.870412    5426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1025 20:26:50.882764    5426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1025 20:26:50.894953    5426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1025 20:26:50.906815    5426 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 20:26:50.910530    5426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 20:26:50.919693    5426 certs.go:54] Setting up /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633 for IP: 192.168.49.2
	I1025 20:26:50.919797    5426 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key
	I1025 20:26:50.919846    5426 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key
	I1025 20:26:50.919881    5426 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/client.key
	I1025 20:26:50.919892    5426 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/client.crt with IP's: []
	I1025 20:26:51.028688    5426 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/client.crt ...
	I1025 20:26:51.028697    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/client.crt: {Name:mk1cc88859f7ea0df77fa1f0941b8b30b1a0a5d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:51.028985    5426 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/client.key ...
	I1025 20:26:51.028997    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/client.key: {Name:mk17f7962a2183e29d8f1077824cdac6682e40c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:51.029200    5426 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key.dd3b5fb2
	I1025 20:26:51.029214    5426 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 20:26:51.204315    5426 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt.dd3b5fb2 ...
	I1025 20:26:51.204326    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt.dd3b5fb2: {Name:mk756fc1248f33b406c32bc107939a14dfd00b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:51.204559    5426 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key.dd3b5fb2 ...
	I1025 20:26:51.204566    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key.dd3b5fb2: {Name:mk9d32f4cb95636a7336a5d5049cac263848e25e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:51.204754    5426 certs.go:320] copying /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt
	I1025 20:26:51.204916    5426 certs.go:324] copying /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key
	I1025 20:26:51.205054    5426 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.key
	I1025 20:26:51.205068    5426 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.crt with IP's: []
	I1025 20:26:51.341055    5426 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.crt ...
	I1025 20:26:51.341066    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.crt: {Name:mk9bc008d3b7c91473dfd3cb643f87c8014bfbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:51.341385    5426 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.key ...
	I1025 20:26:51.341395    5426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.key: {Name:mk3780ba2de13a9e6e9fe51bd2d137995d529867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:26:51.341613    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 20:26:51.341638    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 20:26:51.341654    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 20:26:51.341672    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 20:26:51.341688    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 20:26:51.341709    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 20:26:51.341728    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 20:26:51.341747    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 20:26:51.341837    5426 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem (1338 bytes)
	W1025 20:26:51.341874    5426 certs.go:384] ignoring /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916_empty.pem, impossibly tiny 0 bytes
	I1025 20:26:51.341881    5426 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 20:26:51.341912    5426 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem (1078 bytes)
	I1025 20:26:51.341961    5426 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem (1123 bytes)
	I1025 20:26:51.341988    5426 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem (1679 bytes)
	I1025 20:26:51.342050    5426 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:26:51.342079    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /usr/share/ca-certificates/29162.pem
	I1025 20:26:51.342096    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:26:51.342109    5426 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem -> /usr/share/ca-certificates/2916.pem
	I1025 20:26:51.342580    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 20:26:51.360668    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 20:26:51.377155    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 20:26:51.394065    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/ingress-addon-legacy-202633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 20:26:51.410880    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 20:26:51.427557    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 20:26:51.444831    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 20:26:51.461061    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 20:26:51.477549    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /usr/share/ca-certificates/29162.pem (1708 bytes)
	I1025 20:26:51.494690    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 20:26:51.511405    5426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem --> /usr/share/ca-certificates/2916.pem (1338 bytes)
	I1025 20:26:51.527388    5426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 20:26:51.539358    5426 ssh_runner.go:195] Run: openssl version
	I1025 20:26:51.544537    5426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29162.pem && ln -fs /usr/share/ca-certificates/29162.pem /etc/ssl/certs/29162.pem"
	I1025 20:26:51.552410    5426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29162.pem
	I1025 20:26:51.556361    5426 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:26:51.556399    5426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29162.pem
	I1025 20:26:51.561256    5426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29162.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 20:26:51.569478    5426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 20:26:51.576689    5426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:26:51.580283    5426 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:26:51.580331    5426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:26:51.585262    5426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 20:26:51.592530    5426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2916.pem && ln -fs /usr/share/ca-certificates/2916.pem /etc/ssl/certs/2916.pem"
	I1025 20:26:51.599722    5426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2916.pem
	I1025 20:26:51.603322    5426 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:26:51.603362    5426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2916.pem
	I1025 20:26:51.608338    5426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2916.pem /etc/ssl/certs/51391683.0"
	I1025 20:26:51.615874    5426 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-202633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:26:51.615974    5426 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 20:26:51.636867    5426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 20:26:51.644485    5426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 20:26:51.651653    5426 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1025 20:26:51.651701    5426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 20:26:51.658723    5426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 20:26:51.658752    5426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 20:26:51.703015    5426 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1025 20:26:51.703058    5426 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 20:26:51.971240    5426 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 20:26:51.971341    5426 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 20:26:51.971435    5426 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 20:26:52.171746    5426 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 20:26:52.172458    5426 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 20:26:52.172497    5426 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 20:26:52.245054    5426 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 20:26:52.267952    5426 out.go:204]   - Generating certificates and keys ...
	I1025 20:26:52.268040    5426 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 20:26:52.268113    5426 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 20:26:52.386231    5426 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 20:26:52.645818    5426 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 20:26:52.848805    5426 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 20:26:52.952907    5426 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 20:26:53.404338    5426 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 20:26:53.404507    5426 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-202633 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 20:26:53.659594    5426 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 20:26:53.659726    5426 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-202633 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 20:26:53.843722    5426 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 20:26:53.933729    5426 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 20:26:54.279686    5426 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 20:26:54.279735    5426 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 20:26:54.427480    5426 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 20:26:54.642565    5426 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 20:26:54.836488    5426 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 20:26:54.897620    5426 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 20:26:54.898137    5426 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 20:26:54.919921    5426 out.go:204]   - Booting up control plane ...
	I1025 20:26:54.920109    5426 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 20:26:54.920310    5426 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 20:26:54.920455    5426 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 20:26:54.920594    5426 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 20:26:54.920850    5426 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 20:27:34.879539    5426 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1025 20:27:34.880568    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:27:34.880788    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:27:39.882478    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:27:39.882711    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:27:49.876670    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:27:49.876925    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:28:09.865060    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:28:09.865294    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:28:49.838649    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:28:49.838883    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:28:49.838899    5426 kubeadm.go:317] 
	I1025 20:28:49.838951    5426 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1025 20:28:49.839021    5426 kubeadm.go:317] 		timed out waiting for the condition
	I1025 20:28:49.839036    5426 kubeadm.go:317] 
	I1025 20:28:49.839069    5426 kubeadm.go:317] 	This error is likely caused by:
	I1025 20:28:49.839118    5426 kubeadm.go:317] 		- The kubelet is not running
	I1025 20:28:49.839287    5426 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 20:28:49.839301    5426 kubeadm.go:317] 
	I1025 20:28:49.839412    5426 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 20:28:49.839456    5426 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1025 20:28:49.839508    5426 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1025 20:28:49.839520    5426 kubeadm.go:317] 
	I1025 20:28:49.839638    5426 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 20:28:49.839702    5426 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 20:28:49.839713    5426 kubeadm.go:317] 
	I1025 20:28:49.839773    5426 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1025 20:28:49.839817    5426 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1025 20:28:49.839888    5426 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1025 20:28:49.839917    5426 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1025 20:28:49.839925    5426 kubeadm.go:317] 
	I1025 20:28:49.842211    5426 kubeadm.go:317] W1026 03:26:51.711187     956 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 20:28:49.842279    5426 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 20:28:49.842412    5426 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
	I1025 20:28:49.842499    5426 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 20:28:49.842598    5426 kubeadm.go:317] W1026 03:26:54.913008     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 20:28:49.842694    5426 kubeadm.go:317] W1026 03:26:54.914220     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 20:28:49.842767    5426 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 20:28:49.842832    5426 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
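The troubleshooting commands kubeadm prints above must run inside the minikube node, not on the macOS host. A minimal sketch, assuming the docker driver names the node container after the profile (ingress-addon-legacy-202633, taken from the cert lines above) and that systemd is PID 1 in that container:

    # inspect the kubelet service inside the node container
    docker exec ingress-addon-legacy-202633 systemctl status kubelet
    docker exec ingress-addon-legacy-202633 journalctl -xeu kubelet
    # list Kubernetes containers inside the node, excluding pause sandboxes
    docker exec ingress-addon-legacy-202633 docker ps -a | grep kube | grep -v pause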
	W1025 20:28:49.843002    5426 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-202633 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-202633 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 03:26:51.711187     956 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 03:26:54.913008     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 03:26:54.914220     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
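The repeated "connection refused" on port 10248 above can be reproduced by probing the kubelet healthz endpoint directly; a sketch, assuming the profile name from this run and that curl is available in the node image:

    # a healthy kubelet answers "ok" on its healthz port; a refusal means nothing is listening
    minikube ssh -p ingress-addon-legacy-202633 -- curl -sSL http://localhost:10248/healthz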
	
	I1025 20:28:49.843031    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 20:28:50.256775    5426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:28:50.266131    5426 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1025 20:28:50.266174    5426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 20:28:50.273036    5426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 20:28:50.273058    5426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 20:28:50.317673    5426 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1025 20:28:50.317713    5426 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 20:28:50.591418    5426 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 20:28:50.591503    5426 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 20:28:50.591587    5426 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 20:28:50.792794    5426 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 20:28:50.793427    5426 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 20:28:50.793460    5426 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 20:28:50.870353    5426 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 20:28:50.891938    5426 out.go:204]   - Generating certificates and keys ...
	I1025 20:28:50.892002    5426 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 20:28:50.892052    5426 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 20:28:50.892113    5426 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 20:28:50.892168    5426 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1025 20:28:50.892253    5426 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 20:28:50.892303    5426 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1025 20:28:50.892365    5426 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1025 20:28:50.892412    5426 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1025 20:28:50.892473    5426 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 20:28:50.892546    5426 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 20:28:50.892589    5426 kubeadm.go:317] [certs] Using the existing "sa" key
	I1025 20:28:50.892627    5426 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 20:28:50.987643    5426 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 20:28:51.069551    5426 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 20:28:51.307319    5426 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 20:28:51.608501    5426 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 20:28:51.608874    5426 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 20:28:51.651414    5426 out.go:204]   - Booting up control plane ...
	I1025 20:28:51.651566    5426 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 20:28:51.651719    5426 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 20:28:51.651835    5426 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 20:28:51.651976    5426 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 20:28:51.652284    5426 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 20:29:31.590480    5426 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1025 20:29:31.591156    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:29:31.591466    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:29:36.589166    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:29:36.589416    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:29:46.583583    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:29:46.583785    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:30:06.570841    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:30:06.571038    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:30:46.543703    5426 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 20:30:46.543949    5426 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 20:30:46.543964    5426 kubeadm.go:317] 
	I1025 20:30:46.544004    5426 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1025 20:30:46.544046    5426 kubeadm.go:317] 		timed out waiting for the condition
	I1025 20:30:46.544054    5426 kubeadm.go:317] 
	I1025 20:30:46.544092    5426 kubeadm.go:317] 	This error is likely caused by:
	I1025 20:30:46.544127    5426 kubeadm.go:317] 		- The kubelet is not running
	I1025 20:30:46.544229    5426 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 20:30:46.544237    5426 kubeadm.go:317] 
	I1025 20:30:46.544354    5426 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 20:30:46.544401    5426 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1025 20:30:46.544446    5426 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1025 20:30:46.544454    5426 kubeadm.go:317] 
	I1025 20:30:46.544568    5426 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 20:30:46.544657    5426 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 20:30:46.544666    5426 kubeadm.go:317] 
	I1025 20:30:46.544769    5426 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1025 20:30:46.544824    5426 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1025 20:30:46.544912    5426 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1025 20:30:46.544953    5426 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1025 20:30:46.544968    5426 kubeadm.go:317] 
	I1025 20:30:46.547394    5426 kubeadm.go:317] W1026 03:28:50.324939    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 20:30:46.547455    5426 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 20:30:46.547564    5426 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
	I1025 20:30:46.547658    5426 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 20:30:46.547764    5426 kubeadm.go:317] W1026 03:28:51.622552    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 20:30:46.547865    5426 kubeadm.go:317] W1026 03:28:51.623264    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 20:30:46.547936    5426 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 20:30:46.547998    5426 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
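Two of the preflight warnings above are directly actionable per their own text; a sketch against the node (illustrative only: whether swap can actually be disabled depends on the VM backing the docker driver):

    minikube ssh -p ingress-addon-legacy-202633 -- sudo systemctl enable kubelet.service   # [WARNING Service-Kubelet]
    minikube ssh -p ingress-addon-legacy-202633 -- sudo swapoff -a                         # [WARNING Swap]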
	I1025 20:30:46.548032    5426 kubeadm.go:398] StartCluster complete in 3m54.929712633s
	I1025 20:30:46.548109    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 20:30:46.567446    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.567458    5426 logs.go:276] No container was found matching "kube-apiserver"
	I1025 20:30:46.567523    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 20:30:46.588046    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.588059    5426 logs.go:276] No container was found matching "etcd"
	I1025 20:30:46.588113    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 20:30:46.607986    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.607999    5426 logs.go:276] No container was found matching "coredns"
	I1025 20:30:46.608066    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 20:30:46.629479    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.629491    5426 logs.go:276] No container was found matching "kube-scheduler"
	I1025 20:30:46.629547    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 20:30:46.650014    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.650027    5426 logs.go:276] No container was found matching "kube-proxy"
	I1025 20:30:46.650086    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 20:30:46.671405    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.671416    5426 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1025 20:30:46.671474    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 20:30:46.691227    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.691238    5426 logs.go:276] No container was found matching "storage-provisioner"
	I1025 20:30:46.691294    5426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 20:30:46.711418    5426 logs.go:274] 0 containers: []
	W1025 20:30:46.711429    5426 logs.go:276] No container was found matching "kube-controller-manager"
	I1025 20:30:46.711437    5426 logs.go:123] Gathering logs for describe nodes ...
	I1025 20:30:46.711444    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 20:30:46.761981    5426 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
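The refused connection to localhost:8443 above is the apiserver port inside the node; it can be checked the same way as the kubelet, again assuming the profile name from this run:

    # -k because the apiserver serves a self-signed certificate
    minikube ssh -p ingress-addon-legacy-202633 -- curl -k https://localhost:8443/healthz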
	I1025 20:30:46.761991    5426 logs.go:123] Gathering logs for Docker ...
	I1025 20:30:46.761997    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1025 20:30:46.777597    5426 logs.go:123] Gathering logs for container status ...
	I1025 20:30:46.777610    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 20:30:48.825398    5426 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047754372s)
	I1025 20:30:48.825527    5426 logs.go:123] Gathering logs for kubelet ...
	I1025 20:30:48.825537    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 20:30:48.863632    5426 logs.go:123] Gathering logs for dmesg ...
	I1025 20:30:48.863645    5426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1025 20:30:48.875517    5426 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 03:28:50.324939    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 03:28:51.622552    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 03:28:51.623264    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 20:30:48.875535    5426 out.go:239] * 
	W1025 20:30:48.875656    5426 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 03:28:50.324939    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 03:28:51.622552    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 03:28:51.623264    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 20:30:48.875672    5426 out.go:239] * 
	W1025 20:30:48.876266    5426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
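The collection step the box asks for would look roughly like this (the -p flag is an assumption, so the command targets this run's profile rather than the default one):

    minikube logs -p ingress-addon-legacy-202633 --file=logs.txt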
	I1025 20:30:48.941149    5426 out.go:177] 
	W1025 20:30:48.984292    5426 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 03:28:50.324939    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 03:28:51.622552    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 03:28:51.623264    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 20:30:48.984430    5426 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 20:30:48.984507    5426 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 20:30:49.006202    5426 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-202633 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (255.12s)
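The actionable hints in this failure are the ones kubeadm and minikube print themselves: check the kubelet, list the control-plane containers, and retry with the systemd cgroup-driver override. A minimal manual triage sketch, assuming the ingress-addon-legacy-202633 profile still exists on the host; the commands mirror the hints logged above:

    # Inspect the kubelet inside the node container, per the kubeadm hint
    minikube -p ingress-addon-legacy-202633 ssh -- sudo systemctl status kubelet
    minikube -p ingress-addon-legacy-202633 ssh -- sudo journalctl -xeu kubelet
    # List control-plane containers exactly as the kubeadm hint shows
    minikube -p ingress-addon-legacy-202633 ssh -- 'docker ps -a | grep kube | grep -v pause'
    # Recreate the cluster with the cgroup-driver override minikube suggests
    minikube delete -p ingress-addon-legacy-202633
    minikube start -p ingress-addon-legacy-202633 --driver=docker --memory=4096 \
      --kubernetes-version=v1.18.20 --extra-config=kubelet.cgroup-driver=systemd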

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 addons enable ingress --alsologtostderr -v=5
E1025 20:30:53.962006    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:31:34.922947    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.131565516s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1025 20:30:49.169944    5754 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:30:49.170217    5754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:30:49.170223    5754 out.go:309] Setting ErrFile to fd 2...
	I1025 20:30:49.170227    5754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:30:49.170334    5754 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:30:49.192923    5754 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1025 20:30:49.217273    5754 config.go:180] Loaded profile config "ingress-addon-legacy-202633": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 20:30:49.217296    5754 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-202633"
	I1025 20:30:49.217305    5754 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-202633"
	I1025 20:30:49.217756    5754 host.go:66] Checking if "ingress-addon-legacy-202633" exists ...
	I1025 20:30:49.218352    5754 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-202633 --format={{.State.Status}}
	I1025 20:30:49.303092    5754 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1025 20:30:49.324985    5754 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1025 20:30:49.346527    5754 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I1025 20:30:49.367512    5754 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1025 20:30:49.388736    5754 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 20:30:49.388754    5754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I1025 20:30:49.388839    5754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:30:49.451527    5754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:30:49.548507    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:49.597855    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:49.597882    5754 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:49.874280    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:49.924320    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:49.924341    5754 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:50.464747    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:50.517436    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:50.517450    5754 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:51.173620    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:51.226267    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:51.226288    5754 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:52.019781    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:52.072643    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:52.072660    5754 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:53.245141    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:53.296807    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:53.296821    5754 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:55.552332    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:55.602947    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:55.602963    5754 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:57.216036    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:30:57.268840    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:30:57.268855    5754 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:00.075427    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:31:00.126426    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:00.126442    5754 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:03.953617    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:31:04.005116    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:04.005130    5754 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:11.705028    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:31:11.755390    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:11.755407    5754 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:26.393390    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:31:26.445109    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:26.445123    5754 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:54.854427    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:31:54.908006    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:31:54.908021    5754 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:18.078773    5754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 20:32:18.132074    5754 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:18.132106    5754 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-202633"
	I1025 20:32:18.156842    5754 out.go:177] * Verifying ingress addon...
	I1025 20:32:18.179899    5754 out.go:177] 
	W1025 20:32:18.201952    5754 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-202633" does not exist: client config: context "ingress-addon-legacy-202633" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-202633" does not exist: client config: context "ingress-addon-legacy-202633" does not exist]
	W1025 20:32:18.201981    5754 out.go:239] * 
	* 
	W1025 20:32:18.204996    5754 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 20:32:18.226737    5754 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
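Every retry in the capture above fails the same way (connection refused on localhost:8443), so the addon apply can never succeed: the apiserver from the failed StartLegacyK8sCluster run never came up, and the exponential backoff only delays the MK_ADDON_ENABLE exit. A quick manual probe, reusing the kubeconfig and kubectl paths verbatim from the retry loop (a sketch; it assumes the node container is still up):

    # Does the apiserver answer at all?
    minikube -p ingress-addon-legacy-202633 ssh -- sudo \
      KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.18.20/kubectl get --raw /healthz
    # Only if that prints "ok" is it worth re-running:
    out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 addons enable ingress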
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-202633
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-202633:

-- stdout --
	[
	    {
	        "Id": "91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59",
	        "Created": "2022-10-26T03:26:46.465073966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-26T03:26:46.751006552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/hostname",
	        "HostsPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/hosts",
	        "LogPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59-json.log",
	        "Name": "/ingress-addon-legacy-202633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-202633:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-202633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6-init/diff:/var/lib/docker/overlay2/9458c76ad567886b2941fe702595331447ec81af553bd6a5e305712ba6e99816/diff:/var/lib/docker/overlay2/f360822278c606190700446c63ea52e09800bb98b4011371f467c5329ccbfcdb/diff:/var/lib/docker/overlay2/d19b2a794f1a902d2cb81e3b717a0cbc2759ad547379336883f54acfc56f55aa/diff:/var/lib/docker/overlay2/2da5878d3547c20269c7d0a0c1fe821d0477558b5c9c8c15f108d8e6a7fbefd5/diff:/var/lib/docker/overlay2/8415b06fae0ecbcf9d1229e122da7dc6adef6f37fc541fe10e296454756df8d4/diff:/var/lib/docker/overlay2/3975772ef27829e60ff7a01cf11e459d24a06dd9acff5913f6c2e8275f0531c5/diff:/var/lib/docker/overlay2/3b0582df76ce9d3b29f45dbb3cfc3ec73cbe70e9df311b1864529e3946828d33/diff:/var/lib/docker/overlay2/40719af50c76ff060d79ba1be54c32127a4e49851d7d803f27a18352dfef2832/diff:/var/lib/docker/overlay2/9ccd8153ddc1bc61cae8a0cdd511730f47016b27273ad916204d1ce66039f5c4/diff:/var/lib/docker/overlay2/a99602
f01ac24af886b8248e9900864af0fbc776a4112056a1207b27942db176/diff:/var/lib/docker/overlay2/463c08b6020caddc0bc2b869257a9d4cdff5691d606db4e4a55ae8d203039fb8/diff:/var/lib/docker/overlay2/f3f67d9be6959cfcf69b9056b7af913fae3f9e6c74bec9bacc1f23592237c735/diff:/var/lib/docker/overlay2/f41ea619a41a3987b453fc5993510cda003cef6b896512fdbcd53c39a53c364a/diff:/var/lib/docker/overlay2/cef112361ca2ae2fcde83b778143cbe8b8ce1ddd1f07f8b353b65a088d962e3e/diff:/var/lib/docker/overlay2/ea61c71c4feb5341b51268b2cda82ee1878391b66787be6b295b21684f9a9096/diff:/var/lib/docker/overlay2/a6e559d447ffc610de1597df9b3c965ecc48305f9fcb4f3b43f48d38d43b166c/diff:/var/lib/docker/overlay2/a2dfaaa99882da5754ade275243ff8f867ab1bcc6ad23f15a45c08a117f95c80/diff:/var/lib/docker/overlay2/1518b34809b05558525693674d7a73d688ac90fbe38e233f58881e9d97cd9777/diff:/var/lib/docker/overlay2/c2cb7fb0ac5638040d2c9ed2728804b603304d44690876582ea2f4d1254c0c37/diff:/var/lib/docker/overlay2/fd6cf32d9b25daa7f585a0773f058b146cbd6d48c1c9cb208d36daec316c2f1c/diff:/var/lib/d
ocker/overlay2/10669751bc9b32f9dae2dfbff977817b218d8b62efdfd852669d984939337fc4/diff:/var/lib/docker/overlay2/c9826321b7cdee6e5767fcc25ffdb9f2b170dd88683deccec16775180472e052/diff:/var/lib/docker/overlay2/93fe86f96bbd8578686f5c6e85e468c67425a15bc3468fd6160bcf4b683f7ded/diff:/var/lib/docker/overlay2/22378b0a3177562c1dc57988573177acf03ee9353f251bd90424608f6609736f/diff:/var/lib/docker/overlay2/6f9a8de4c84b855e54278f112ef385b65cf7ce83137774bd447f581f931fdba8/diff:/var/lib/docker/overlay2/75929d4024047d79d1cb07e0aa4cbe999dcfe81d92a4f19bf4e934b7c749c777/diff:/var/lib/docker/overlay2/11747eb76a2c5d4e3e52e7791ccbb44129898ae37da84c5adb31930723991822/diff:/var/lib/docker/overlay2/3d0c322f0fbeca039eb0f2ace2e48a6556860edb13c31a68056d07f644b5947c/diff:/var/lib/docker/overlay2/37e5caf2125330a396059ef67db6dd7eeabbfcc3afd90b6364bbe13a2d4763ab/diff:/var/lib/docker/overlay2/7f66f473740d4034513069c7bd4de43269d2b328f058b3fbc64868409371fd53/diff:/var/lib/docker/overlay2/e7853ca89704ef21aa7014120bcc549c1259a5d8c3ef8a5932e2a095ef5
e8000/diff:/var/lib/docker/overlay2/236b2362f06a587e036fe0814a4a9f0a20f71d0bbd18b50ac3fcb17db425944b/diff:/var/lib/docker/overlay2/50076bcff37472720dbb36d9a3a48bb0432d6948a66b414369014ef78341f6bc/diff:/var/lib/docker/overlay2/f99fb67031aec99b950ed8054f90cd9baf7bcb83c4327c55617b11bba62f9d7a/diff:/var/lib/docker/overlay2/7f4f0cde1c3401952137a79e3dcde3c4ab23a17f6389d90215259d7431664326/diff:/var/lib/docker/overlay2/e9000629b4b1d18176f36ab8e78d978815141d81647473111b9a757aa4d55c64/diff:/var/lib/docker/overlay2/c75b32d5c68e353b0c46448c7980a0bb24c76734e38c27560b4da707e6dd5b6c/diff:/var/lib/docker/overlay2/9d95b640d4231740021ecd34b6875b2237ba49aba22355bacd06b1384cbbca01/diff:/var/lib/docker/overlay2/c67258bc5b10e43cb64f062847929271782f1774217621794cc512958d39874b/diff:/var/lib/docker/overlay2/10e2fa31b1491df452294985ea0c2b6d9e91bf8af6bc6d688fa4593f1fc089ad/diff:/var/lib/docker/overlay2/e3db9f814154a501672e127c2ef7363bb31a226f80a9d212f5cfdd9429fa486f/diff:/var/lib/docker/overlay2/866c15e4f6bd7392ddbc6f3e1eae5d8cc90fba
5017067d8c9133857eae97bdcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-202633",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-202633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-202633",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-202633",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-202633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9ae6205c31aa55548bf4e00c82cdca431f856bfbf02571c1189d00869e566cc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a9ae6205c31a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-202633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91325e527488",
	                        "ingress-addon-legacy-202633"
	                    ],
	                    "NetworkID": "b10fee9b2757f39de814a9b62ed35c2cfa98accbc46f1e9f0fd061973622fe8c",
	                    "EndpointID": "5062bdef03423fea63c9b189b111cfbf41bd5476e38debaa7113febfa2213051",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
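The inspect dump narrows the failure to the control plane: the node container itself is healthy (State.Status is "running", SSH published on host port 50430). The port lookup the harness performs before opening its ssh client uses a Go template; reproduced here single-quoted so the shell leaves the inner "22/tcp" quotes intact (same template as the cli_runner.go line earlier in the log):

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      ingress-addon-legacy-202633
    # prints 50430 for this run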
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-202633 -n ingress-addon-legacy-202633
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-202633 -n ingress-addon-legacy-202633: exit status 6 (412.47881ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1025 20:32:18.721557    5845 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-202633" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-202633" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)
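Exit status 6 here is a kubeconfig problem layered on top of the dead control plane: the profile never made it into /Users/jenkins/minikube-integration/14956-2080/kubeconfig, hence the "stale minikube-vm" warning. The warning's own suggestion is the usual fix, sketched below; note it may still fail on this host because the start itself never completed:

    # Regenerate the kubeconfig entry for the profile, as the warning suggests
    minikube -p ingress-addon-legacy-202633 update-context
    kubectl config current-context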

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 addons enable ingress-dns --alsologtostderr -v=5
E1025 20:32:56.846155    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.045688901s)

-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I1025 20:32:18.780898    5855 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:32:18.781216    5855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:32:18.781221    5855 out.go:309] Setting ErrFile to fd 2...
	I1025 20:32:18.781225    5855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:32:18.781335    5855 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:32:18.802732    5855 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1025 20:32:18.823466    5855 config.go:180] Loaded profile config "ingress-addon-legacy-202633": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 20:32:18.823497    5855 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-202633"
	I1025 20:32:18.823516    5855 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-202633"
	I1025 20:32:18.824051    5855 host.go:66] Checking if "ingress-addon-legacy-202633" exists ...
	I1025 20:32:18.824913    5855 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-202633 --format={{.State.Status}}
	I1025 20:32:18.909992    5855 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1025 20:32:18.932162    5855 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1025 20:32:18.953987    5855 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 20:32:18.954031    5855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1025 20:32:18.954153    5855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-202633
	I1025 20:32:19.017866    5855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/ingress-addon-legacy-202633/id_rsa Username:docker}
	I1025 20:32:19.114308    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:19.162505    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:19.162525    5855 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:19.440998    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:19.490703    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:19.490727    5855 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:20.032101    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:20.081224    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:20.081239    5855 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:20.737301    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:20.788684    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:20.788700    5855 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:21.580152    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:21.630892    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:21.630906    5855 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:22.801622    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:22.855227    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:22.855242    5855 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:25.110728    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:25.163172    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:25.163187    5855 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:26.775912    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:26.825717    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:26.825731    5855 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:29.630627    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:29.681474    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:29.681487    5855 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:33.507682    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:33.559564    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:33.559577    5855 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:41.259449    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:41.309526    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:41.309544    5855 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:55.945551    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:32:55.997491    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:32:55.997507    5855 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:33:24.406639    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:33:24.458013    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:33:24.458029    5855 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:33:47.627509    5855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 20:33:47.681543    5855 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 20:33:47.703639    5855 out.go:177] 
	W1025 20:33:47.725448    5855 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1025 20:33:47.725473    5855 out.go:239] * 
	* 
	W1025 20:33:47.728509    5855 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 20:33:47.750044    5855 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-202633
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-202633:

-- stdout --
	[
	    {
	        "Id": "91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59",
	        "Created": "2022-10-26T03:26:46.465073966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-26T03:26:46.751006552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/hostname",
	        "HostsPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/hosts",
	        "LogPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59-json.log",
	        "Name": "/ingress-addon-legacy-202633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-202633:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-202633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6-init/diff:/var/lib/docker/overlay2/9458c76ad567886b2941fe702595331447ec81af553bd6a5e305712ba6e99816/diff:/var/lib/docker/overlay2/f360822278c606190700446c63ea52e09800bb98b4011371f467c5329ccbfcdb/diff:/var/lib/docker/overlay2/d19b2a794f1a902d2cb81e3b717a0cbc2759ad547379336883f54acfc56f55aa/diff:/var/lib/docker/overlay2/2da5878d3547c20269c7d0a0c1fe821d0477558b5c9c8c15f108d8e6a7fbefd5/diff:/var/lib/docker/overlay2/8415b06fae0ecbcf9d1229e122da7dc6adef6f37fc541fe10e296454756df8d4/diff:/var/lib/docker/overlay2/3975772ef27829e60ff7a01cf11e459d24a06dd9acff5913f6c2e8275f0531c5/diff:/var/lib/docker/overlay2/3b0582df76ce9d3b29f45dbb3cfc3ec73cbe70e9df311b1864529e3946828d33/diff:/var/lib/docker/overlay2/40719af50c76ff060d79ba1be54c32127a4e49851d7d803f27a18352dfef2832/diff:/var/lib/docker/overlay2/9ccd8153ddc1bc61cae8a0cdd511730f47016b27273ad916204d1ce66039f5c4/diff:/var/lib/docker/overlay2/a99602f01ac24af886b8248e9900864af0fbc776a4112056a1207b27942db176/diff:/var/lib/docker/overlay2/463c08b6020caddc0bc2b869257a9d4cdff5691d606db4e4a55ae8d203039fb8/diff:/var/lib/docker/overlay2/f3f67d9be6959cfcf69b9056b7af913fae3f9e6c74bec9bacc1f23592237c735/diff:/var/lib/docker/overlay2/f41ea619a41a3987b453fc5993510cda003cef6b896512fdbcd53c39a53c364a/diff:/var/lib/docker/overlay2/cef112361ca2ae2fcde83b778143cbe8b8ce1ddd1f07f8b353b65a088d962e3e/diff:/var/lib/docker/overlay2/ea61c71c4feb5341b51268b2cda82ee1878391b66787be6b295b21684f9a9096/diff:/var/lib/docker/overlay2/a6e559d447ffc610de1597df9b3c965ecc48305f9fcb4f3b43f48d38d43b166c/diff:/var/lib/docker/overlay2/a2dfaaa99882da5754ade275243ff8f867ab1bcc6ad23f15a45c08a117f95c80/diff:/var/lib/docker/overlay2/1518b34809b05558525693674d7a73d688ac90fbe38e233f58881e9d97cd9777/diff:/var/lib/docker/overlay2/c2cb7fb0ac5638040d2c9ed2728804b603304d44690876582ea2f4d1254c0c37/diff:/var/lib/docker/overlay2/fd6cf32d9b25daa7f585a0773f058b146cbd6d48c1c9cb208d36daec316c2f1c/diff:/var/lib/docker/overlay2/10669751bc9b32f9dae2dfbff977817b218d8b62efdfd852669d984939337fc4/diff:/var/lib/docker/overlay2/c9826321b7cdee6e5767fcc25ffdb9f2b170dd88683deccec16775180472e052/diff:/var/lib/docker/overlay2/93fe86f96bbd8578686f5c6e85e468c67425a15bc3468fd6160bcf4b683f7ded/diff:/var/lib/docker/overlay2/22378b0a3177562c1dc57988573177acf03ee9353f251bd90424608f6609736f/diff:/var/lib/docker/overlay2/6f9a8de4c84b855e54278f112ef385b65cf7ce83137774bd447f581f931fdba8/diff:/var/lib/docker/overlay2/75929d4024047d79d1cb07e0aa4cbe999dcfe81d92a4f19bf4e934b7c749c777/diff:/var/lib/docker/overlay2/11747eb76a2c5d4e3e52e7791ccbb44129898ae37da84c5adb31930723991822/diff:/var/lib/docker/overlay2/3d0c322f0fbeca039eb0f2ace2e48a6556860edb13c31a68056d07f644b5947c/diff:/var/lib/docker/overlay2/37e5caf2125330a396059ef67db6dd7eeabbfcc3afd90b6364bbe13a2d4763ab/diff:/var/lib/docker/overlay2/7f66f473740d4034513069c7bd4de43269d2b328f058b3fbc64868409371fd53/diff:/var/lib/docker/overlay2/e7853ca89704ef21aa7014120bcc549c1259a5d8c3ef8a5932e2a095ef5e8000/diff:/var/lib/docker/overlay2/236b2362f06a587e036fe0814a4a9f0a20f71d0bbd18b50ac3fcb17db425944b/diff:/var/lib/docker/overlay2/50076bcff37472720dbb36d9a3a48bb0432d6948a66b414369014ef78341f6bc/diff:/var/lib/docker/overlay2/f99fb67031aec99b950ed8054f90cd9baf7bcb83c4327c55617b11bba62f9d7a/diff:/var/lib/docker/overlay2/7f4f0cde1c3401952137a79e3dcde3c4ab23a17f6389d90215259d7431664326/diff:/var/lib/docker/overlay2/e9000629b4b1d18176f36ab8e78d978815141d81647473111b9a757aa4d55c64/diff:/var/lib/docker/overlay2/c75b32d5c68e353b0c46448c7980a0bb24c76734e38c27560b4da707e6dd5b6c/diff:/var/lib/docker/overlay2/9d95b640d4231740021ecd34b6875b2237ba49aba22355bacd06b1384cbbca01/diff:/var/lib/docker/overlay2/c67258bc5b10e43cb64f062847929271782f1774217621794cc512958d39874b/diff:/var/lib/docker/overlay2/10e2fa31b1491df452294985ea0c2b6d9e91bf8af6bc6d688fa4593f1fc089ad/diff:/var/lib/docker/overlay2/e3db9f814154a501672e127c2ef7363bb31a226f80a9d212f5cfdd9429fa486f/diff:/var/lib/docker/overlay2/866c15e4f6bd7392ddbc6f3e1eae5d8cc90fba5017067d8c9133857eae97bdcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-202633",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-202633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-202633",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-202633",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-202633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9ae6205c31aa55548bf4e00c82cdca431f856bfbf02571c1189d00869e566cc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a9ae6205c31a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-202633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91325e527488",
	                        "ingress-addon-legacy-202633"
	                    ],
	                    "NetworkID": "b10fee9b2757f39de814a9b62ed35c2cfa98accbc46f1e9f0fd061973622fe8c",
	                    "EndpointID": "5062bdef03423fea63c9b189b111cfbf41bd5476e38debaa7113febfa2213051",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-202633 -n ingress-addon-legacy-202633
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-202633 -n ingress-addon-legacy-202633: exit status 6 (412.607214ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1025 20:33:48.247490    5937 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-202633" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-202633" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.52s)
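Every retry above fails the same way: each kubectl apply to localhost:8443 is refused, meaning the apiserver inside the ingress-addon-legacy-202633 node never became reachable before minikube gave up with MK_ADDON_ENABLE. A plausible manual triage, sketched on the assumption that the container from the docker inspect above is still running and that its in-node docker runtime is up:

	# Is the kube-apiserver container present inside the kic node?
	docker exec ingress-addon-legacy-202633 docker ps -a --filter name=kube-apiserver
	# kicbase boots systemd (entrypoint /usr/local/bin/entrypoint /sbin/init), so kubelet state should be queryable
	docker exec ingress-addon-legacy-202633 systemctl status kubelet
	# Collect the full cluster logs that the error box above asks for
	out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 logs --file=logs.txt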

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-202633
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-202633:

-- stdout --
	[
	    {
	        "Id": "91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59",
	        "Created": "2022-10-26T03:26:46.465073966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-26T03:26:46.751006552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/hostname",
	        "HostsPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/hosts",
	        "LogPath": "/var/lib/docker/containers/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59/91325e527488b1597303a687c4b2b1307e00f34cef1aadb837f1e7304edc3b59-json.log",
	        "Name": "/ingress-addon-legacy-202633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-202633:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-202633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6-init/diff:/var/lib/docker/overlay2/9458c76ad567886b2941fe702595331447ec81af553bd6a5e305712ba6e99816/diff:/var/lib/docker/overlay2/f360822278c606190700446c63ea52e09800bb98b4011371f467c5329ccbfcdb/diff:/var/lib/docker/overlay2/d19b2a794f1a902d2cb81e3b717a0cbc2759ad547379336883f54acfc56f55aa/diff:/var/lib/docker/overlay2/2da5878d3547c20269c7d0a0c1fe821d0477558b5c9c8c15f108d8e6a7fbefd5/diff:/var/lib/docker/overlay2/8415b06fae0ecbcf9d1229e122da7dc6adef6f37fc541fe10e296454756df8d4/diff:/var/lib/docker/overlay2/3975772ef27829e60ff7a01cf11e459d24a06dd9acff5913f6c2e8275f0531c5/diff:/var/lib/docker/overlay2/3b0582df76ce9d3b29f45dbb3cfc3ec73cbe70e9df311b1864529e3946828d33/diff:/var/lib/docker/overlay2/40719af50c76ff060d79ba1be54c32127a4e49851d7d803f27a18352dfef2832/diff:/var/lib/docker/overlay2/9ccd8153ddc1bc61cae8a0cdd511730f47016b27273ad916204d1ce66039f5c4/diff:/var/lib/docker/overlay2/a99602f01ac24af886b8248e9900864af0fbc776a4112056a1207b27942db176/diff:/var/lib/docker/overlay2/463c08b6020caddc0bc2b869257a9d4cdff5691d606db4e4a55ae8d203039fb8/diff:/var/lib/docker/overlay2/f3f67d9be6959cfcf69b9056b7af913fae3f9e6c74bec9bacc1f23592237c735/diff:/var/lib/docker/overlay2/f41ea619a41a3987b453fc5993510cda003cef6b896512fdbcd53c39a53c364a/diff:/var/lib/docker/overlay2/cef112361ca2ae2fcde83b778143cbe8b8ce1ddd1f07f8b353b65a088d962e3e/diff:/var/lib/docker/overlay2/ea61c71c4feb5341b51268b2cda82ee1878391b66787be6b295b21684f9a9096/diff:/var/lib/docker/overlay2/a6e559d447ffc610de1597df9b3c965ecc48305f9fcb4f3b43f48d38d43b166c/diff:/var/lib/docker/overlay2/a2dfaaa99882da5754ade275243ff8f867ab1bcc6ad23f15a45c08a117f95c80/diff:/var/lib/docker/overlay2/1518b34809b05558525693674d7a73d688ac90fbe38e233f58881e9d97cd9777/diff:/var/lib/docker/overlay2/c2cb7fb0ac5638040d2c9ed2728804b603304d44690876582ea2f4d1254c0c37/diff:/var/lib/docker/overlay2/fd6cf32d9b25daa7f585a0773f058b146cbd6d48c1c9cb208d36daec316c2f1c/diff:/var/lib/docker/overlay2/10669751bc9b32f9dae2dfbff977817b218d8b62efdfd852669d984939337fc4/diff:/var/lib/docker/overlay2/c9826321b7cdee6e5767fcc25ffdb9f2b170dd88683deccec16775180472e052/diff:/var/lib/docker/overlay2/93fe86f96bbd8578686f5c6e85e468c67425a15bc3468fd6160bcf4b683f7ded/diff:/var/lib/docker/overlay2/22378b0a3177562c1dc57988573177acf03ee9353f251bd90424608f6609736f/diff:/var/lib/docker/overlay2/6f9a8de4c84b855e54278f112ef385b65cf7ce83137774bd447f581f931fdba8/diff:/var/lib/docker/overlay2/75929d4024047d79d1cb07e0aa4cbe999dcfe81d92a4f19bf4e934b7c749c777/diff:/var/lib/docker/overlay2/11747eb76a2c5d4e3e52e7791ccbb44129898ae37da84c5adb31930723991822/diff:/var/lib/docker/overlay2/3d0c322f0fbeca039eb0f2ace2e48a6556860edb13c31a68056d07f644b5947c/diff:/var/lib/docker/overlay2/37e5caf2125330a396059ef67db6dd7eeabbfcc3afd90b6364bbe13a2d4763ab/diff:/var/lib/docker/overlay2/7f66f473740d4034513069c7bd4de43269d2b328f058b3fbc64868409371fd53/diff:/var/lib/docker/overlay2/e7853ca89704ef21aa7014120bcc549c1259a5d8c3ef8a5932e2a095ef5e8000/diff:/var/lib/docker/overlay2/236b2362f06a587e036fe0814a4a9f0a20f71d0bbd18b50ac3fcb17db425944b/diff:/var/lib/docker/overlay2/50076bcff37472720dbb36d9a3a48bb0432d6948a66b414369014ef78341f6bc/diff:/var/lib/docker/overlay2/f99fb67031aec99b950ed8054f90cd9baf7bcb83c4327c55617b11bba62f9d7a/diff:/var/lib/docker/overlay2/7f4f0cde1c3401952137a79e3dcde3c4ab23a17f6389d90215259d7431664326/diff:/var/lib/docker/overlay2/e9000629b4b1d18176f36ab8e78d978815141d81647473111b9a757aa4d55c64/diff:/var/lib/docker/overlay2/c75b32d5c68e353b0c46448c7980a0bb24c76734e38c27560b4da707e6dd5b6c/diff:/var/lib/docker/overlay2/9d95b640d4231740021ecd34b6875b2237ba49aba22355bacd06b1384cbbca01/diff:/var/lib/docker/overlay2/c67258bc5b10e43cb64f062847929271782f1774217621794cc512958d39874b/diff:/var/lib/docker/overlay2/10e2fa31b1491df452294985ea0c2b6d9e91bf8af6bc6d688fa4593f1fc089ad/diff:/var/lib/docker/overlay2/e3db9f814154a501672e127c2ef7363bb31a226f80a9d212f5cfdd9429fa486f/diff:/var/lib/docker/overlay2/866c15e4f6bd7392ddbc6f3e1eae5d8cc90fba5017067d8c9133857eae97bdcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/550a7551385334e090cd833df206c98ffba8e077ccd48e2c22a7aaeba8026ba6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-202633",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-202633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-202633",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-202633",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-202633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9ae6205c31aa55548bf4e00c82cdca431f856bfbf02571c1189d00869e566cc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a9ae6205c31a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-202633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91325e527488",
	                        "ingress-addon-legacy-202633"
	                    ],
	                    "NetworkID": "b10fee9b2757f39de814a9b62ed35c2cfa98accbc46f1e9f0fd061973622fe8c",
	                    "EndpointID": "5062bdef03423fea63c9b189b111cfbf41bd5476e38debaa7113febfa2213051",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-202633 -n ingress-addon-legacy-202633
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-202633 -n ingress-addon-legacy-202633: exit status 6 (416.464808ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1025 20:33:48.727914    5949 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-202633" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-202633" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)
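Both post-mortems in this group stall at the same point: the profile has no entry in the kubeconfig the suite points at, so status exits 6 and addons_test.go cannot build a Kubernetes client. The warning printed by status names the repair; a minimal sketch, assuming the profile still exists (names and paths are copied from the output above):

	# Confirm the context really is absent from the kubeconfig used by the tests
	KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig kubectl config get-contexts
	# Rewrite the stale endpoint for this profile, as the warning recommends
	out/minikube-darwin-amd64 -p ingress-addon-legacy-202633 update-context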

TestMultiNode/serial/RestartMultiNode (185.6s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-203818 --wait=true -v=8 --alsologtostderr --driver=docker 
E1025 20:45:12.434709    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:45:12.971530    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 20:46:36.029943    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
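The cert_rotation errors above come from pid 2916, the long-running test binary rather than the minikube run below (pid 9122); its client-go certificate watcher still references client.crt files for profiles (addons-201804, functional-202225) that earlier tests tore down, so they are likely background noise rather than the cause of the exit-status-80 failure that follows. A quick check that the certificates are simply gone (paths copied verbatim from the errors):

	ls -l /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt \
	      /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt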
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-203818 --wait=true -v=8 --alsologtostderr --driver=docker : exit status 80 (3m0.870247816s)

-- stdout --
	* [multinode-203818] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-203818 in cluster multinode-203818
	* Pulling base image ...
	* Restarting existing docker container for "multinode-203818" ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-203818-m02 in cluster multinode-203818
	* Pulling base image ...
	* Restarting existing docker container for "multinode-203818-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	  - env NO_PROXY=192.168.58.2
	
	

-- /stdout --
** stderr ** 
	I1025 20:43:36.797301    9122 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:43:36.797493    9122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:43:36.797498    9122 out.go:309] Setting ErrFile to fd 2...
	I1025 20:43:36.797502    9122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:43:36.797615    9122 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:43:36.798058    9122 out.go:303] Setting JSON to false
	I1025 20:43:36.812595    9122 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2585,"bootTime":1666753231,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:43:36.812705    9122 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:43:36.835738    9122 out.go:177] * [multinode-203818] minikube v1.27.1 on Darwin 12.6
	I1025 20:43:36.879617    9122 notify.go:220] Checking for updates...
	I1025 20:43:36.901235    9122 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:43:36.922539    9122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:43:36.944550    9122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:43:36.991233    9122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:43:37.012549    9122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:43:37.034916    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:43:37.035511    9122 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:43:37.102805    9122 docker.go:137] docker version: linux-20.10.17
	I1025 20:43:37.102942    9122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:43:37.230560    9122 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:47 SystemTime:2022-10-26 03:43:37.179691583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:43:37.274370    9122 out.go:177] * Using the docker driver based on existing profile
	I1025 20:43:37.296166    9122 start.go:282] selected driver: docker
	I1025 20:43:37.296209    9122 start.go:808] validating driver "docker" against &{Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:43:37.296456    9122 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:43:37.296680    9122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:43:37.427024    9122 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:47 SystemTime:2022-10-26 03:43:37.375162167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:43:37.429216    9122 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 20:43:37.429247    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:43:37.429254    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:43:37.429281    9122 start_flags.go:317] config:
	{Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:43:37.472854    9122 out.go:177] * Starting control plane node multinode-203818 in cluster multinode-203818
	I1025 20:43:37.493762    9122 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:43:37.516097    9122 out.go:177] * Pulling base image ...
	I1025 20:43:37.559861    9122 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:43:37.559935    9122 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:43:37.559962    9122 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 20:43:37.560021    9122 cache.go:57] Caching tarball of preloaded images
	I1025 20:43:37.560881    9122 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 20:43:37.560977    9122 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 20:43:37.561415    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:43:37.624892    9122 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:43:37.624911    9122 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:43:37.624926    9122 cache.go:208] Successfully downloaded all kic artifacts
	I1025 20:43:37.624970    9122 start.go:364] acquiring machines lock for multinode-203818: {Name:mk88e10ba1d84a7a598add48978caab9a0493783 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:43:37.625062    9122 start.go:368] acquired machines lock for "multinode-203818" in 55.292µs
	I1025 20:43:37.625081    9122 start.go:96] Skipping create...Using existing machine configuration
	I1025 20:43:37.625090    9122 fix.go:55] fixHost starting: 
	I1025 20:43:37.625347    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:43:37.686750    9122 fix.go:103] recreateIfNeeded on multinode-203818: state=Stopped err=<nil>
	W1025 20:43:37.686786    9122 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 20:43:37.708893    9122 out.go:177] * Restarting existing docker container for "multinode-203818" ...
	I1025 20:43:37.730490    9122 cli_runner.go:164] Run: docker start multinode-203818
	I1025 20:43:38.065539    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:43:38.129208    9122 kic.go:415] container "multinode-203818" state is running.
	I1025 20:43:38.129785    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
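
The restart sequence above is: docker start, then docker container inspect --format={{.State.Status}} until the container reports running, then a second inspect to recover the container's IPs. A minimal Go sketch of that status check, assuming only that the docker CLI is on PATH:

// Minimal container status check via the docker CLI.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-203818")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("state:", state) // "running" once `docker start` has taken effect
}
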
	I1025 20:43:38.239205    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:43:38.239606    9122 machine.go:88] provisioning docker machine ...
	I1025 20:43:38.239629    9122 ubuntu.go:169] provisioning hostname "multinode-203818"
	I1025 20:43:38.239720    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:38.304990    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:38.305182    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:38.305195    9122 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-203818 && echo "multinode-203818" | sudo tee /etc/hostname
	I1025 20:43:38.444923    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-203818
	
	I1025 20:43:38.445006    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:38.510155    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:38.510312    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:38.510326    9122 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-203818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-203818/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-203818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 20:43:38.631522    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:43:38.631547    9122 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/14956-2080/.minikube CaCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/14956-2080/.minikube}
	I1025 20:43:38.631580    9122 ubuntu.go:177] setting up certificates
	I1025 20:43:38.631588    9122 provision.go:83] configureAuth start
	I1025 20:43:38.631648    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
	I1025 20:43:38.697977    9122 provision.go:138] copyHostCerts
	I1025 20:43:38.698021    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:43:38.698088    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem, removing ...
	I1025 20:43:38.698098    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:43:38.698197    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem (1078 bytes)
	I1025 20:43:38.698393    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:43:38.698425    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem, removing ...
	I1025 20:43:38.698430    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:43:38.698490    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem (1123 bytes)
	I1025 20:43:38.698599    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:43:38.698629    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem, removing ...
	I1025 20:43:38.698633    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:43:38.698691    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem (1679 bytes)
	I1025 20:43:38.698820    9122 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem org=jenkins.multinode-203818 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-203818]
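
The provision step above regenerates the Docker daemon's server certificate, signed by the minikube CA, with the listed SANs. A self-contained Go sketch of that kind of SAN-bearing server cert issuance (illustrative only, not minikube's provision code; errors are elided for brevity):

// Illustrative CA-signed server cert carrying the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the example is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the DNS and IP SANs seen in san=[...] above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-203818"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-203818"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
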
	I1025 20:43:38.920081    9122 provision.go:172] copyRemoteCerts
	I1025 20:43:38.920143    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 20:43:38.920190    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:38.986346    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:39.075651    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 20:43:39.075765    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 20:43:39.092201    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 20:43:39.092269    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 20:43:39.108732    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 20:43:39.108790    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 20:43:39.124975    9122 provision.go:86] duration metric: configureAuth took 493.374297ms
	I1025 20:43:39.124987    9122 ubuntu.go:193] setting minikube options for container-runtime
	I1025 20:43:39.125145    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:43:39.125204    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.188102    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:39.188237    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:39.188246    9122 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 20:43:39.316932    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 20:43:39.316947    9122 ubuntu.go:71] root file system type: overlay
	I1025 20:43:39.317079    9122 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 20:43:39.317144    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.380637    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:39.380818    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:39.380865    9122 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 20:43:39.518577    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 20:43:39.518648    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.580675    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:39.580814    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:39.580827    9122 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 20:43:39.715195    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:43:39.715213    9122 machine.go:91] provisioned docker machine in 1.475598823s
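
Note the pattern in the docker.service update just above: render the unit to docker.service.new, then diff it against the live unit and only mv/daemon-reload/restart when the two differ, which keeps repeated provisioning idempotent. A small Go sketch of that write-diff-swap step; runShell is a hypothetical stand-in for minikube's SSH runner and simply runs the command locally:

// Sketch of the write-diff-swap step seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func runShell(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// swapUnitIfChanged replaces unit with unit.new and restarts docker only
// when diff reports a change (diff exits non-zero on difference, so the
// || branch fires exactly then).
func swapUnitIfChanged(unit string) error {
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f restart docker; }",
		unit)
	return runShell(cmd)
}

func main() {
	if err := swapUnitIfChanged("/lib/systemd/system/docker.service"); err != nil {
		fmt.Println("swap failed:", err)
	}
}
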
	I1025 20:43:39.715222    9122 start.go:300] post-start starting for "multinode-203818" (driver="docker")
	I1025 20:43:39.715228    9122 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 20:43:39.715309    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 20:43:39.715355    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.777880    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:39.867298    9122 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 20:43:39.870682    9122 command_runner.go:130] > NAME="Ubuntu"
	I1025 20:43:39.870694    9122 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1025 20:43:39.870698    9122 command_runner.go:130] > ID=ubuntu
	I1025 20:43:39.870702    9122 command_runner.go:130] > ID_LIKE=debian
	I1025 20:43:39.870706    9122 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1025 20:43:39.870709    9122 command_runner.go:130] > VERSION_ID="20.04"
	I1025 20:43:39.870713    9122 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 20:43:39.870717    9122 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 20:43:39.870722    9122 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 20:43:39.870729    9122 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 20:43:39.870733    9122 command_runner.go:130] > VERSION_CODENAME=focal
	I1025 20:43:39.870740    9122 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1025 20:43:39.870782    9122 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 20:43:39.870794    9122 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 20:43:39.870806    9122 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 20:43:39.870811    9122 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 20:43:39.870818    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/addons for local assets ...
	I1025 20:43:39.870910    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/files for local assets ...
	I1025 20:43:39.871059    9122 filesync.go:149] local asset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> 29162.pem in /etc/ssl/certs
	I1025 20:43:39.871064    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /etc/ssl/certs/29162.pem
	I1025 20:43:39.871200    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 20:43:39.877925    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:43:39.894568    9122 start.go:303] post-start completed in 179.336217ms
	I1025 20:43:39.894636    9122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:43:39.894679    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.956826    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:40.049863    9122 command_runner.go:130] > 6%
	I1025 20:43:40.049929    9122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:43:40.054081    9122 command_runner.go:130] > 92G
	I1025 20:43:40.054431    9122 fix.go:57] fixHost completed within 2.42934018s
	I1025 20:43:40.054442    9122 start.go:83] releasing machines lock for "multinode-203818", held for 2.429369424s
	I1025 20:43:40.054516    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
	I1025 20:43:40.117705    9122 ssh_runner.go:195] Run: systemctl --version
	I1025 20:43:40.117706    9122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 20:43:40.117767    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:40.117804    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:40.184020    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:40.184253    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:40.316302    9122 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 20:43:40.316335    9122 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I1025 20:43:40.316354    9122 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I1025 20:43:40.316481    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 20:43:40.323968    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1025 20:43:40.336177    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:43:40.403011    9122 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 20:43:40.482035    9122 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 20:43:40.490847    9122 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 20:43:40.490858    9122 command_runner.go:130] > [Unit]
	I1025 20:43:40.490868    9122 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 20:43:40.490873    9122 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 20:43:40.490877    9122 command_runner.go:130] > BindsTo=containerd.service
	I1025 20:43:40.490885    9122 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 20:43:40.490890    9122 command_runner.go:130] > Wants=network-online.target
	I1025 20:43:40.490896    9122 command_runner.go:130] > Requires=docker.socket
	I1025 20:43:40.490900    9122 command_runner.go:130] > StartLimitBurst=3
	I1025 20:43:40.490903    9122 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 20:43:40.490906    9122 command_runner.go:130] > [Service]
	I1025 20:43:40.490909    9122 command_runner.go:130] > Type=notify
	I1025 20:43:40.490912    9122 command_runner.go:130] > Restart=on-failure
	I1025 20:43:40.490919    9122 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 20:43:40.490927    9122 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 20:43:40.490933    9122 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 20:43:40.490939    9122 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 20:43:40.490945    9122 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 20:43:40.490952    9122 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 20:43:40.490959    9122 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 20:43:40.490972    9122 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 20:43:40.490979    9122 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 20:43:40.490983    9122 command_runner.go:130] > ExecStart=
	I1025 20:43:40.490994    9122 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 20:43:40.490999    9122 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 20:43:40.491006    9122 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 20:43:40.491011    9122 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 20:43:40.491014    9122 command_runner.go:130] > LimitNOFILE=infinity
	I1025 20:43:40.491018    9122 command_runner.go:130] > LimitNPROC=infinity
	I1025 20:43:40.491035    9122 command_runner.go:130] > LimitCORE=infinity
	I1025 20:43:40.491044    9122 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 20:43:40.491050    9122 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 20:43:40.491053    9122 command_runner.go:130] > TasksMax=infinity
	I1025 20:43:40.491057    9122 command_runner.go:130] > TimeoutStartSec=0
	I1025 20:43:40.491062    9122 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 20:43:40.491070    9122 command_runner.go:130] > Delegate=yes
	I1025 20:43:40.491078    9122 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 20:43:40.491081    9122 command_runner.go:130] > KillMode=process
	I1025 20:43:40.491089    9122 command_runner.go:130] > [Install]
	I1025 20:43:40.491093    9122 command_runner.go:130] > WantedBy=multi-user.target
	I1025 20:43:40.491402    9122 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 20:43:40.491456    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 20:43:40.500811    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 20:43:40.512645    9122 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 20:43:40.512656    9122 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
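
crictl is pointed at cri-dockerd by the two-line /etc/crictl.yaml written above. A trivial Go sketch that renders the same file; the real flow pipes it through sudo tee over SSH, while this sketch assumes direct write access for simplicity:

// Render the same two-line crictl.yaml as the log's printf | sudo tee step.
package main

import (
	"fmt"
	"os"
)

func main() {
	const sock = "unix:///var/run/cri-dockerd.sock"
	body := fmt.Sprintf("runtime-endpoint: %s\nimage-endpoint: %s\n", sock, sock)
	if err := os.WriteFile("/etc/crictl.yaml", []byte(body), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "write crictl.yaml:", err)
		os.Exit(1)
	}
}
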
	I1025 20:43:40.513547    9122 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 20:43:40.580296    9122 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 20:43:40.648806    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:43:40.714674    9122 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 20:43:40.953289    9122 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 20:43:41.019114    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:43:41.084840    9122 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 20:43:41.093931    9122 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 20:43:41.093997    9122 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 20:43:41.097488    9122 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 20:43:41.097498    9122 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 20:43:41.097502    9122 command_runner.go:130] > Device: 97h/151d	Inode: 115         Links: 1
	I1025 20:43:41.097507    9122 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 20:43:41.097513    9122 command_runner.go:130] > Access: 2022-10-26 03:43:40.411250876 +0000
	I1025 20:43:41.097517    9122 command_runner.go:130] > Modify: 2022-10-26 03:43:40.411250876 +0000
	I1025 20:43:41.097522    9122 command_runner.go:130] > Change: 2022-10-26 03:43:40.412250876 +0000
	I1025 20:43:41.097525    9122 command_runner.go:130] >  Birth: -
	I1025 20:43:41.097649    9122 start.go:472] Will wait 60s for crictl version
	I1025 20:43:41.097690    9122 ssh_runner.go:195] Run: sudo crictl version
	I1025 20:43:41.123286    9122 command_runner.go:130] > Version:  0.1.0
	I1025 20:43:41.123297    9122 command_runner.go:130] > RuntimeName:  docker
	I1025 20:43:41.123301    9122 command_runner.go:130] > RuntimeVersion:  20.10.18
	I1025 20:43:41.123316    9122 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1025 20:43:41.125594    9122 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 20:43:41.125654    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:43:41.150619    9122 command_runner.go:130] > 20.10.18
	I1025 20:43:41.152806    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:43:41.177603    9122 command_runner.go:130] > 20.10.18
	I1025 20:43:41.225401    9122 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 20:43:41.225613    9122 cli_runner.go:164] Run: docker exec -t multinode-203818 dig +short host.docker.internal
	I1025 20:43:41.344949    9122 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 20:43:41.345051    9122 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 20:43:41.349231    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
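
The /etc/hosts edit above uses a filter-and-append trick: grep -v strips any stale host.minikube.internal line, the fresh entry is echoed onto the end, and the result is copied back over /etc/hosts. The same logic in plain Go, parameterized on the file path so it can be tried on a scratch copy:

// Filter-and-append hosts-file edit, mirroring the bash one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror `grep -v $'\t<name>$'`: drop any existing line for this name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("hosts.scratch", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
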
	I1025 20:43:41.358537    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:41.421091    9122 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:43:41.421171    9122 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 20:43:41.442027    9122 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1025 20:43:41.442045    9122 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1025 20:43:41.442050    9122 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1025 20:43:41.442063    9122 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1025 20:43:41.442069    9122 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1025 20:43:41.442075    9122 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1025 20:43:41.442081    9122 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1025 20:43:41.442088    9122 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1025 20:43:41.442092    9122 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1025 20:43:41.442096    9122 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 20:43:41.442101    9122 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1025 20:43:41.444156    9122 docker.go:612] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1025 20:43:41.444172    9122 docker.go:543] Images already preloaded, skipping extraction
	I1025 20:43:41.444241    9122 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 20:43:41.463199    9122 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1025 20:43:41.463214    9122 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1025 20:43:41.463224    9122 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1025 20:43:41.463230    9122 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1025 20:43:41.463235    9122 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1025 20:43:41.463240    9122 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1025 20:43:41.463243    9122 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1025 20:43:41.463249    9122 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1025 20:43:41.463254    9122 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1025 20:43:41.463258    9122 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 20:43:41.463261    9122 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1025 20:43:41.465231    9122 docker.go:612] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1025 20:43:41.465249    9122 cache_images.go:84] Images are preloaded, skipping loading
	I1025 20:43:41.465324    9122 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 20:43:41.528201    9122 command_runner.go:130] > systemd
	I1025 20:43:41.530250    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:43:41.530265    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:43:41.530300    9122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 20:43:41.530317    9122 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-203818 NodeName:multinode-203818 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 20:43:41.530436    9122 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-203818"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
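
The kubeadm.yaml above is rendered from the kubeadm options struct logged earlier. A minimal text/template sketch of how such a rendering can work, abridged to the InitConfiguration stanza; the struct and template here are illustrative, not minikube's actual bootstrapper template:

// Illustrative text/template rendering of the InitConfiguration stanza.
package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.58.2",
		BindPort:         8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "multinode-203818",
		NodeIP:           "192.168.58.2",
	})
}
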
	
	I1025 20:43:41.530534    9122 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-203818 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 20:43:41.530599    9122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 20:43:41.537380    9122 command_runner.go:130] > kubeadm
	I1025 20:43:41.537395    9122 command_runner.go:130] > kubectl
	I1025 20:43:41.537402    9122 command_runner.go:130] > kubelet
	I1025 20:43:41.538254    9122 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 20:43:41.538303    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 20:43:41.545156    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I1025 20:43:41.557366    9122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 20:43:41.569642    9122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I1025 20:43:41.581820    9122 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 20:43:41.585658    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 20:43:41.594782    9122 certs.go:54] Setting up /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818 for IP: 192.168.58.2
	I1025 20:43:41.594881    9122 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key
	I1025 20:43:41.594927    9122 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key
	I1025 20:43:41.595008    9122 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key
	I1025 20:43:41.595062    9122 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.key.cee25041
	I1025 20:43:41.595115    9122 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.key
	I1025 20:43:41.595122    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 20:43:41.595140    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 20:43:41.595154    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 20:43:41.595169    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 20:43:41.595183    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 20:43:41.595198    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 20:43:41.595211    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 20:43:41.595226    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 20:43:41.595328    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem (1338 bytes)
	W1025 20:43:41.595362    9122 certs.go:384] ignoring /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916_empty.pem, impossibly tiny 0 bytes
	I1025 20:43:41.595369    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 20:43:41.595398    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem (1078 bytes)
	I1025 20:43:41.595426    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem (1123 bytes)
	I1025 20:43:41.595450    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem (1679 bytes)
	I1025 20:43:41.595515    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:43:41.595545    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem -> /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.595561    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.595575    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.596017    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 20:43:41.612413    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 20:43:41.629316    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 20:43:41.646225    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 20:43:41.662889    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 20:43:41.679620    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 20:43:41.696067    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 20:43:41.712685    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 20:43:41.729013    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem --> /usr/share/ca-certificates/2916.pem (1338 bytes)
	I1025 20:43:41.745682    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /usr/share/ca-certificates/29162.pem (1708 bytes)
	I1025 20:43:41.761943    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 20:43:41.778134    9122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 20:43:41.791312    9122 ssh_runner.go:195] Run: openssl version
	I1025 20:43:41.796102    9122 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1025 20:43:41.796312    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 20:43:41.804092    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.824272    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.824421    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.824464    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.829180    9122 command_runner.go:130] > b5213941
	I1025 20:43:41.829376    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 20:43:41.836757    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2916.pem && ln -fs /usr/share/ca-certificates/2916.pem /etc/ssl/certs/2916.pem"
	I1025 20:43:41.844364    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.847905    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.848011    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.848048    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.852725    9122 command_runner.go:130] > 51391683
	I1025 20:43:41.853102    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2916.pem /etc/ssl/certs/51391683.0"
	I1025 20:43:41.860674    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29162.pem && ln -fs /usr/share/ca-certificates/29162.pem /etc/ssl/certs/29162.pem"
	I1025 20:43:41.868283    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.871936    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.872081    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.872123    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.877007    9122 command_runner.go:130] > 3ec20f2e
	I1025 20:43:41.877337    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29162.pem /etc/ssl/certs/3ec20f2e.0"
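
Each CA above is installed the way OpenSSL expects: hash the subject with openssl x509 -hash, then symlink <hash>.0 in /etc/ssl/certs at the PEM so lookup-by-hash finds it. A Go sketch of that install step; it shells out to the openssl CLI for the hash, exactly as the log does, rather than reimplementing OpenSSL's canonical subject hashing:

// Hash-and-symlink CA install, mirroring the openssl/ln -fs steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
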
	I1025 20:43:41.884680    9122 kubeadm.go:396] StartCluster: {Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:43:41.884786    9122 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 20:43:41.905710    9122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 20:43:41.912543    9122 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1025 20:43:41.912555    9122 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1025 20:43:41.912560    9122 command_runner.go:130] > /var/lib/minikube/etcd:
	I1025 20:43:41.912564    9122 command_runner.go:130] > member
	I1025 20:43:41.913350    9122 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1025 20:43:41.913363    9122 kubeadm.go:627] restartCluster start
	I1025 20:43:41.913401    9122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 20:43:41.919901    9122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:41.919960    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:41.984585    9122 kubeconfig.go:135] verify returned: extract IP: "multinode-203818" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:43:41.984694    9122 kubeconfig.go:146] "multinode-203818" context is missing from /Users/jenkins/minikube-integration/14956-2080/kubeconfig - will repair!
	I1025 20:43:41.984907    9122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/kubeconfig: {Name:mke147bd0f9c02680989e4cfb1c572f71a0430b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:43:41.985383    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:43:41.985562    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 20:43:41.985845    9122 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 20:43:41.986018    9122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 20:43:41.993770    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:41.993835    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.002126    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.204259    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.204447    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.215734    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.404272    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.404480    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.415415    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.604273    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.604449    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.614604    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.804287    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.804524    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.815730    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.004270    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.004421    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.015319    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.204300    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.204451    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.215085    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.404292    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.404451    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.415218    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.604276    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.604422    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.615380    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.804391    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.804481    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.814752    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.004285    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.004483    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.015694    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.204068    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.204246    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.214623    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.404410    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.404509    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.414514    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.604238    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.604447    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.615090    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.804212    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.804384    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.815087    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.004239    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:45.004378    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:45.015089    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.015098    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:45.015139    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:45.022900    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.022911    9122 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
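
The loop above retries the same pgrep roughly every 200 ms until restartCluster gives up and falls back to a full reconfigure. A minimal stdlib Go sketch of that poll (the run callback and the hard-coded timeout are assumptions for illustration, not minikube's actual helper):

    package main

    import (
        "errors"
        "strings"
        "time"
    )

    // waitForAPIServerPID repeats the pgrep from the log until a PID appears
    // or the deadline passes; run is a hypothetical SSH command runner.
    func waitForAPIServerPID(run func(cmd string) (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*")
            if err == nil && strings.TrimSpace(out) != "" {
                return strings.TrimSpace(out), nil // apiserver process found
            }
            time.Sleep(200 * time.Millisecond) // matches the ~200 ms cadence above
        }
        return "", errors.New("timed out waiting for the condition")
    }
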
	I1025 20:43:45.022917    9122 kubeadm.go:1114] stopping kube-system containers ...
	I1025 20:43:45.022975    9122 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 20:43:45.045339    9122 command_runner.go:130] > a76713468a8e
	I1025 20:43:45.045351    9122 command_runner.go:130] > bf7b5ebb864d
	I1025 20:43:45.045355    9122 command_runner.go:130] > 6e75fc801378
	I1025 20:43:45.045358    9122 command_runner.go:130] > c5b570db3f97
	I1025 20:43:45.045361    9122 command_runner.go:130] > c08d84877f86
	I1025 20:43:45.045364    9122 command_runner.go:130] > d412a631e4ae
	I1025 20:43:45.045367    9122 command_runner.go:130] > 901030c09673
	I1025 20:43:45.045371    9122 command_runner.go:130] > fa258b141e90
	I1025 20:43:45.045376    9122 command_runner.go:130] > 3494771f98f1
	I1025 20:43:45.045381    9122 command_runner.go:130] > acf347f03ed9
	I1025 20:43:45.045385    9122 command_runner.go:130] > c0ffc4ed686c
	I1025 20:43:45.045388    9122 command_runner.go:130] > 29a55c918cc0
	I1025 20:43:45.045391    9122 command_runner.go:130] > 6578e02f60a4
	I1025 20:43:45.045394    9122 command_runner.go:130] > 34b369462e06
	I1025 20:43:45.045398    9122 command_runner.go:130] > aa702be3519c
	I1025 20:43:45.045402    9122 command_runner.go:130] > 6e35a55843e1
	I1025 20:43:45.045407    9122 command_runner.go:130] > 67c78a683e4d
	I1025 20:43:45.045416    9122 command_runner.go:130] > 7f82edcd8e10
	I1025 20:43:45.045420    9122 command_runner.go:130] > 66606cdef38a
	I1025 20:43:45.045436    9122 command_runner.go:130] > b2980ae0c352
	I1025 20:43:45.045440    9122 command_runner.go:130] > d6944494206f
	I1025 20:43:45.045443    9122 command_runner.go:130] > 01e7c971f29b
	I1025 20:43:45.045446    9122 command_runner.go:130] > ed3dab775831
	I1025 20:43:45.045449    9122 command_runner.go:130] > 113916e4ec18
	I1025 20:43:45.045452    9122 command_runner.go:130] > d8ecd8887c5d
	I1025 20:43:45.045456    9122 command_runner.go:130] > 87ee196d2cd9
	I1025 20:43:45.045459    9122 command_runner.go:130] > 6b8aa122335e
	I1025 20:43:45.045462    9122 command_runner.go:130] > 311c6f77b2dd
	I1025 20:43:45.045466    9122 command_runner.go:130] > e12714297d31
	I1025 20:43:45.045469    9122 command_runner.go:130] > 2ec5e29e095a
	I1025 20:43:45.045472    9122 command_runner.go:130] > b79b4f06c21d
	I1025 20:43:45.045476    9122 command_runner.go:130] > e8f6e8673bc0
	I1025 20:43:45.047546    9122 docker.go:444] Stopping containers: [a76713468a8e bf7b5ebb864d 6e75fc801378 c5b570db3f97 c08d84877f86 d412a631e4ae 901030c09673 fa258b141e90 3494771f98f1 acf347f03ed9 c0ffc4ed686c 29a55c918cc0 6578e02f60a4 34b369462e06 aa702be3519c 6e35a55843e1 67c78a683e4d 7f82edcd8e10 66606cdef38a b2980ae0c352 d6944494206f 01e7c971f29b ed3dab775831 113916e4ec18 d8ecd8887c5d 87ee196d2cd9 6b8aa122335e 311c6f77b2dd e12714297d31 2ec5e29e095a b79b4f06c21d e8f6e8673bc0]
	I1025 20:43:45.047617    9122 ssh_runner.go:195] Run: docker stop a76713468a8e bf7b5ebb864d 6e75fc801378 c5b570db3f97 c08d84877f86 d412a631e4ae 901030c09673 fa258b141e90 3494771f98f1 acf347f03ed9 c0ffc4ed686c 29a55c918cc0 6578e02f60a4 34b369462e06 aa702be3519c 6e35a55843e1 67c78a683e4d 7f82edcd8e10 66606cdef38a b2980ae0c352 d6944494206f 01e7c971f29b ed3dab775831 113916e4ec18 d8ecd8887c5d 87ee196d2cd9 6b8aa122335e 311c6f77b2dd e12714297d31 2ec5e29e095a b79b4f06c21d e8f6e8673bc0
	I1025 20:43:45.069687    9122 command_runner.go:130] > a76713468a8e
	I1025 20:43:45.069714    9122 command_runner.go:130] > bf7b5ebb864d
	I1025 20:43:45.070077    9122 command_runner.go:130] > 6e75fc801378
	I1025 20:43:45.070084    9122 command_runner.go:130] > c5b570db3f97
	I1025 20:43:45.070089    9122 command_runner.go:130] > c08d84877f86
	I1025 20:43:45.070097    9122 command_runner.go:130] > d412a631e4ae
	I1025 20:43:45.070708    9122 command_runner.go:130] > 901030c09673
	I1025 20:43:45.070714    9122 command_runner.go:130] > fa258b141e90
	I1025 20:43:45.070717    9122 command_runner.go:130] > 3494771f98f1
	I1025 20:43:45.070722    9122 command_runner.go:130] > acf347f03ed9
	I1025 20:43:45.070726    9122 command_runner.go:130] > c0ffc4ed686c
	I1025 20:43:45.070729    9122 command_runner.go:130] > 29a55c918cc0
	I1025 20:43:45.070733    9122 command_runner.go:130] > 6578e02f60a4
	I1025 20:43:45.070736    9122 command_runner.go:130] > 34b369462e06
	I1025 20:43:45.070740    9122 command_runner.go:130] > aa702be3519c
	I1025 20:43:45.070743    9122 command_runner.go:130] > 6e35a55843e1
	I1025 20:43:45.070747    9122 command_runner.go:130] > 67c78a683e4d
	I1025 20:43:45.070750    9122 command_runner.go:130] > 7f82edcd8e10
	I1025 20:43:45.070754    9122 command_runner.go:130] > 66606cdef38a
	I1025 20:43:45.070758    9122 command_runner.go:130] > b2980ae0c352
	I1025 20:43:45.070762    9122 command_runner.go:130] > d6944494206f
	I1025 20:43:45.070765    9122 command_runner.go:130] > 01e7c971f29b
	I1025 20:43:45.070769    9122 command_runner.go:130] > ed3dab775831
	I1025 20:43:45.070772    9122 command_runner.go:130] > 113916e4ec18
	I1025 20:43:45.070776    9122 command_runner.go:130] > d8ecd8887c5d
	I1025 20:43:45.070783    9122 command_runner.go:130] > 87ee196d2cd9
	I1025 20:43:45.070787    9122 command_runner.go:130] > 6b8aa122335e
	I1025 20:43:45.070791    9122 command_runner.go:130] > 311c6f77b2dd
	I1025 20:43:45.070795    9122 command_runner.go:130] > e12714297d31
	I1025 20:43:45.070798    9122 command_runner.go:130] > 2ec5e29e095a
	I1025 20:43:45.070801    9122 command_runner.go:130] > b79b4f06c21d
	I1025 20:43:45.070806    9122 command_runner.go:130] > e8f6e8673bc0
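
Stopping the kube-system containers is a two-step docker call: list IDs matching the k8s_.*_(kube-system)_ name filter, then stop them all in one invocation, exactly as the two Run lines above show. A sketch shelling out the same way (error handling trimmed):

    package main

    import (
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers mirrors the docker ps / docker stop pair above.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing to stop
        }
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }
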
	I1025 20:43:45.073246    9122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 20:43:45.082964    9122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 20:43:45.089843    9122 command_runner.go:130] > -rw------- 1 root root 5643 Oct 26 03:38 /etc/kubernetes/admin.conf
	I1025 20:43:45.089863    9122 command_runner.go:130] > -rw------- 1 root root 5652 Oct 26 03:41 /etc/kubernetes/controller-manager.conf
	I1025 20:43:45.089870    9122 command_runner.go:130] > -rw------- 1 root root 2003 Oct 26 03:38 /etc/kubernetes/kubelet.conf
	I1025 20:43:45.089877    9122 command_runner.go:130] > -rw------- 1 root root 5600 Oct 26 03:41 /etc/kubernetes/scheduler.conf
	I1025 20:43:45.090504    9122 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 26 03:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 26 03:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Oct 26 03:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 26 03:41 /etc/kubernetes/scheduler.conf
	
	I1025 20:43:45.090555    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 20:43:45.096749    9122 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1025 20:43:45.097416    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 20:43:45.103841    9122 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1025 20:43:45.104474    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 20:43:45.111377    9122 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.111425    9122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 20:43:45.117786    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 20:43:45.124321    9122 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.124361    9122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
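
The grep/rm pairs above keep any kubeconfig that already points at the expected control-plane endpoint and delete the rest, so the kubeconfig phase below can regenerate them. A sketch of that pruning (file list and endpoint string taken from the log; running it for real would need root):

    package main

    import (
        "os"
        "strings"
    )

    // pruneStaleKubeconfigs drops configs that no longer reference the
    // control-plane endpoint; kubeadm rewrites them in the kubeconfig phase.
    func pruneStaleKubeconfigs() {
        const endpoint = "server: https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            b, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(b), endpoint) {
                os.Remove(f)
            }
        }
    }
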
	I1025 20:43:45.130791    9122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 20:43:45.137836    9122 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 20:43:45.137850    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:45.177937    9122 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 20:43:45.177950    9122 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1025 20:43:45.177964    9122 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1025 20:43:45.177971    9122 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 20:43:45.177980    9122 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1025 20:43:45.177987    9122 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1025 20:43:45.178228    9122 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1025 20:43:45.178392    9122 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1025 20:43:45.178685    9122 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1025 20:43:45.178845    9122 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 20:43:45.179211    9122 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 20:43:45.179708    9122 command_runner.go:130] > [certs] Using the existing "sa" key
	I1025 20:43:45.182267    9122 command_runner.go:130] ! W1026 03:43:45.177650    1166 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:45.182282    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:45.222772    9122 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 20:43:45.376929    9122 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1025 20:43:45.491773    9122 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1025 20:43:45.579278    9122 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 20:43:45.908244    9122 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 20:43:45.912776    9122 command_runner.go:130] ! W1026 03:43:45.223201    1175 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:45.912796    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:45.964483    9122 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 20:43:45.964990    9122 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 20:43:45.964999    9122 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 20:43:46.039526    9122 command_runner.go:130] ! W1026 03:43:45.955639    1198 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:46.039552    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:46.078197    9122 command_runner.go:130] ! W1026 03:43:46.082911    1233 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:46.089924    9122 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 20:43:46.089937    9122 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 20:43:46.089942    9122 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 20:43:46.089948    9122 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 20:43:46.089962    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:46.169856    9122 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 20:43:46.175403    9122 command_runner.go:130] ! W1026 03:43:46.169866    1246 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
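
Rather than a full kubeadm init, the restart replays individual phases against the generated config: certs, kubeconfig, kubelet-start, control-plane, and etcd (addon all follows once the apiserver is healthy). A sketch of that sequence (binary and config paths copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runKubeadmPhases replays the init phases in the order the log shows.
    func runKubeadmPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.25.3/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v failed: %v\n%s", p, err, out)
            }
        }
        return nil
    }
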
	I1025 20:43:46.175426    9122 api_server.go:51] waiting for apiserver process to appear ...
	I1025 20:43:46.175470    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:46.690790    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:47.189243    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:47.688717    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:48.189268    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:48.199232    9122 command_runner.go:130] > 1821
	I1025 20:43:48.200039    9122 api_server.go:71] duration metric: took 2.024610468s to wait for apiserver process to appear ...
	I1025 20:43:48.200060    9122 api_server.go:87] waiting for apiserver healthz status ...
	I1025 20:43:48.200088    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:50.534225    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 20:43:50.534242    9122 api_server.go:102] status: https://127.0.0.1:51345/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 20:43:51.034939    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:51.042629    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 20:43:51.042650    9122 api_server.go:102] status: https://127.0.0.1:51345/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 20:43:51.534457    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:51.540222    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 20:43:51.540236    9122 api_server.go:102] status: https://127.0.0.1:51345/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 20:43:52.036395    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:52.043580    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 200:
	ok
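
The healthz probes above trace the apiserver's startup: a 403 while the request is still treated as system:anonymous, 500s while poststart hooks such as rbac/bootstrap-roles finish, then a plain 200 "ok". A sketch of polling /healthz over TLS (the cert paths are the profile files logged earlier; the 500 ms interval matches the gaps between probes above):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "io"
        "net/http"
        "os"
        "time"
    )

    // waitHealthz polls the apiserver until /healthz returns 200 "ok".
    func waitHealthz(host, caFile, certFile, keyFile string, timeout time.Duration) bool {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return false
        }
        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            return false
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(host + "/healthz"); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return true
                }
                // 403 (anonymous) or 500 (failing poststarthook): keep polling
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }
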
	I1025 20:43:52.043641    9122 round_trippers.go:463] GET https://127.0.0.1:51345/version
	I1025 20:43:52.043649    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:52.043658    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:52.043670    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:52.049885    9122 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1025 20:43:52.049894    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:52.049900    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:52.049905    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:52.049913    9122 round_trippers.go:580]     Content-Length: 263
	I1025 20:43:52.049918    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:52 GMT
	I1025 20:43:52.049922    9122 round_trippers.go:580]     Audit-Id: c4eba5a2-7038-404b-a89e-7e6dd65fcffc
	I1025 20:43:52.049927    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:52.049932    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:52.049949    9122 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 20:43:52.049993    9122 api_server.go:140] control plane version: v1.25.3
	I1025 20:43:52.050000    9122 api_server.go:130] duration metric: took 3.849933686s to wait for apiserver health ...
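
With healthz green, a GET on /version returns the JSON above, and its gitVersion field becomes the reported control-plane version. A minimal decode of that response (field names match the body above; the client is assumed to be the TLS client from the previous sketch):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // versionInfo holds the subset of /version fields used here.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func controlPlaneVersion(client *http.Client, host string) (string, error) {
        resp, err := client.Get(host + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil // "v1.25.3" in this run
    }
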
	I1025 20:43:52.050016    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:43:52.050024    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:43:52.071610    9122 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 20:43:52.092506    9122 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 20:43:52.098345    9122 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 20:43:52.098356    9122 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I1025 20:43:52.098361    9122 command_runner.go:130] > Device: 8eh/142d	Inode: 1185203     Links: 1
	I1025 20:43:52.098366    9122 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 20:43:52.098372    9122 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I1025 20:43:52.098377    9122 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I1025 20:43:52.098381    9122 command_runner.go:130] > Change: 2022-10-26 03:18:20.497780245 +0000
	I1025 20:43:52.098384    9122 command_runner.go:130] >  Birth: -
	I1025 20:43:52.098411    9122 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 20:43:52.098418    9122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1025 20:43:52.113650    9122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 20:43:52.990277    9122 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1025 20:43:52.991914    9122 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1025 20:43:52.994083    9122 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1025 20:43:53.011276    9122 command_runner.go:130] > daemonset.apps/kindnet configured
	I1025 20:43:53.056810    9122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 20:43:53.056909    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:43:53.056921    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.056929    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.056937    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.062417    9122 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 20:43:53.062435    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.062442    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.062448    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.062461    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.062469    9122 round_trippers.go:580]     Audit-Id: 3431891b-4a28-47d9-907c-16c0df9e0448
	I1025 20:43:53.062475    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.062481    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.063966    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"963"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"736","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85214 chars]
	I1025 20:43:53.067470    9122 system_pods.go:59] 12 kube-system pods found
	I1025 20:43:53.067677    9122 system_pods.go:61] "coredns-565d847f94-tvhv6" [c89eabb7-66d0-469a-8966-ceeb6f9b215e] Running
	I1025 20:43:53.067686    9122 system_pods.go:61] "etcd-multinode-203818" [49b2d2ea-40ad-40fa-bab3-93930d3e9d10] Running
	I1025 20:43:53.067695    9122 system_pods.go:61] "kindnet-8xvrw" [a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07] Running
	I1025 20:43:53.067701    9122 system_pods.go:61] "kindnet-l9tx2" [0bc050f8-3916-4ad8-9eca-ec2de9c7c4d9] Running
	I1025 20:43:53.067710    9122 system_pods.go:61] "kindnet-q9qv5" [d5252527-eabb-4b78-9901-bfb15f51fc1b] Running
	I1025 20:43:53.067716    9122 system_pods.go:61] "kube-apiserver-multinode-203818" [e95d0701-3478-4373-8740-541b9481b83a] Running
	I1025 20:43:53.067743    9122 system_pods.go:61] "kube-controller-manager-multinode-203818" [cade2617-19dd-49f7-940e-d92e7b847fb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 20:43:53.067755    9122 system_pods.go:61] "kube-proxy-48p2l" [cf96a572-bbca-4af2-bd3e-7d377772cef4] Running
	I1025 20:43:53.067767    9122 system_pods.go:61] "kube-proxy-9j45q" [f3494f97-7b4b-4072-83ad-9a8308ed6c9b] Running
	I1025 20:43:53.067773    9122 system_pods.go:61] "kube-proxy-j799s" [281b0817-ab50-4c73-b20e-0774fcc2f594] Running
	I1025 20:43:53.067778    9122 system_pods.go:61] "kube-scheduler-multinode-203818" [352db6de-72fe-4aaa-b7b7-79881ea11d8e] Running
	I1025 20:43:53.067784    9122 system_pods.go:61] "storage-provisioner" [93c13130-1e73-4433-b82f-b565797df5c6] Running
	I1025 20:43:53.067789    9122 system_pods.go:74] duration metric: took 10.963464ms to wait for pod list to return data ...
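
Waiting for kube-system pods boils down to one PodList request and a per-pod status summary, which is what the twelve system_pods lines above print. A client-go sketch of the same listing (kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // listKubeSystemPods prints name, UID, and phase for every kube-system pod.
    func listKubeSystemPods(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }
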
	I1025 20:43:53.067798    9122 node_conditions.go:102] verifying NodePressure condition ...
	I1025 20:43:53.067893    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes
	I1025 20:43:53.067900    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.067908    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.067916    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.072655    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:43:53.072670    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.072676    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.072680    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.072684    9122 round_trippers.go:580]     Audit-Id: a6bc24c7-01c1-4fa1-8d3c-2039042e9cd9
	I1025 20:43:53.072688    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.072693    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.072697    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.072779    9122 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"967"},"items":[{"metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10904 chars]
	I1025 20:43:53.073263    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:43:53.073275    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:43:53.073286    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:43:53.073289    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:43:53.073293    9122 node_conditions.go:105] duration metric: took 5.491155ms to run NodePressure ...
	I1025 20:43:53.073311    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:53.286253    9122 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1025 20:43:53.363042    9122 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1025 20:43:53.366573    9122 command_runner.go:130] ! W1026 03:43:53.178030    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:53.366597    9122 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1025 20:43:53.366648    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1025 20:43:53.366653    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.366660    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.366665    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.369944    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.369959    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.369966    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.369972    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.369981    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.369991    9122 round_trippers.go:580]     Audit-Id: 2a3fa272-d820-41a4-affd-f8c87d65facc
	I1025 20:43:53.370002    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.370023    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.370411    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"978"},"items":[{"metadata":{"name":"etcd-multinode-203818","namespace":"kube-system","uid":"49b2d2ea-40ad-40fa-bab3-93930d3e9d10","resourceVersion":"755","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.mirror":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.seen":"2022-10-26T03:38:46.168169599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30656 chars]
	I1025 20:43:53.371145    9122 kubeadm.go:778] kubelet initialised
	I1025 20:43:53.371154    9122 kubeadm.go:779] duration metric: took 4.550039ms waiting for restarted kubelet to initialise ...
	I1025 20:43:53.371162    9122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
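
Each pod_ready wait below is a poll on a single pod's Ready condition (the node object is fetched alongside to confirm the pod's host is still registered). The predicate being polled is essentially this (corev1 types; the helper name is invented):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
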
	I1025 20:43:53.371206    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:43:53.371212    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.371218    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.371224    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.375030    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.375061    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.375085    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.375100    9122 round_trippers.go:580]     Audit-Id: 8b143cb5-388f-464d-ab9e-60d90887fd97
	I1025 20:43:53.375108    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.375116    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.375121    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.375126    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.376208    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"978"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"736","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85422 chars]
	I1025 20:43:53.378159    9122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.378205    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/coredns-565d847f94-tvhv6
	I1025 20:43:53.378209    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.378216    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.378221    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.380399    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.380412    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.380418    9122 round_trippers.go:580]     Audit-Id: 4fdb392f-304a-4596-a459-e6158d8b61c7
	I1025 20:43:53.380422    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.380427    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.380431    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.380435    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.380444    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.380510    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"736","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6550 chars]
	I1025 20:43:53.380822    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.380828    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.380834    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.380839    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.383196    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.383214    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.383223    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.383228    9122 round_trippers.go:580]     Audit-Id: ca0c1a4d-14d6-4384-a917-cf9615c00f84
	I1025 20:43:53.383234    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.383242    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.383248    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.383252    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.383325    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.383549    9122 pod_ready.go:92] pod "coredns-565d847f94-tvhv6" in "kube-system" namespace has status "Ready":"True"
	I1025 20:43:53.383558    9122 pod_ready.go:81] duration metric: took 5.387904ms waiting for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.383565    9122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.383602    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/etcd-multinode-203818
	I1025 20:43:53.383607    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.383612    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.383617    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.386546    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.386564    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.386575    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.386581    9122 round_trippers.go:580]     Audit-Id: 78d854a5-38e6-4a8f-8810-f7171a549d85
	I1025 20:43:53.386587    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.386593    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.386597    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.386602    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.386811    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-203818","namespace":"kube-system","uid":"49b2d2ea-40ad-40fa-bab3-93930d3e9d10","resourceVersion":"755","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.mirror":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.seen":"2022-10-26T03:38:46.168169599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I1025 20:43:53.387057    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.387063    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.387070    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.387075    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.390084    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.390098    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.390104    9122 round_trippers.go:580]     Audit-Id: ec50719b-7494-4fba-b10b-1d01c774bc65
	I1025 20:43:53.390115    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.390121    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.390152    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.390160    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.390171    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.390226    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.390460    9122 pod_ready.go:92] pod "etcd-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:43:53.390468    9122 pod_ready.go:81] duration metric: took 6.89885ms waiting for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.390479    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.390514    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-203818
	I1025 20:43:53.390519    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.390525    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.390530    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.392746    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.392759    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.392767    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.392772    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.392777    9122 round_trippers.go:580]     Audit-Id: 0ba34327-c5b6-4453-b9c9-31d0c71759dd
	I1025 20:43:53.392785    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.392792    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.392796    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.392854    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-203818","namespace":"kube-system","uid":"e95d0701-3478-4373-8740-541b9481b83a","resourceVersion":"770","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.mirror":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.seen":"2022-10-26T03:38:46.168180019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I1025 20:43:53.393137    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.393144    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.393149    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.393154    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.395956    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.395967    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.395973    9122 round_trippers.go:580]     Audit-Id: 29b0b768-5e72-4f53-946d-7cf1f1365fbf
	I1025 20:43:53.395978    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.395983    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.395990    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.395995    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.395999    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.396054    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.396271    9122 pod_ready.go:92] pod "kube-apiserver-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:43:53.396280    9122 pod_ready.go:81] duration metric: took 5.796829ms waiting for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.396288    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.396326    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:53.396332    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.396340    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.396347    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.398938    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.398949    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.398955    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.398959    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.398964    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.398970    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.398976    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.398980    9122 round_trippers.go:580]     Audit-Id: fd02bf37-a7df-4c97-8efe-d3b50984e148
	I1025 20:43:53.399749    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:53.457302    9122 request.go:614] Waited for 57.096509ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.457364    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.457387    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.457395    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.457402    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.460682    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.460697    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.460703    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.460708    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.460713    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.460718    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.460726    9122 round_trippers.go:580]     Audit-Id: a8a8d202-3f30-4f93-b61a-c231ed5569e7
	I1025 20:43:53.460734    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.460916    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.961492    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:53.961514    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.961527    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.961537    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.965232    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.965245    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.965252    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.965259    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.965267    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.965274    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.965280    9122 round_trippers.go:580]     Audit-Id: 41eb336b-b11f-4509-a1ab-5198dca2b4b5
	I1025 20:43:53.965286    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.965362    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:53.965693    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.965699    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.965705    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.965710    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.967762    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.967771    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.967776    9122 round_trippers.go:580]     Audit-Id: ecb78efb-7be8-49c8-82f4-2cc96c45dcb2
	I1025 20:43:53.967781    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.967786    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.967790    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.967795    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.967800    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.967847    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:54.461524    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:54.461536    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.461543    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.461548    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.463495    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:54.463510    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.463516    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.463521    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.463527    9122 round_trippers.go:580]     Audit-Id: 0a310735-d98a-4186-9a26-45f55b2f3f03
	I1025 20:43:54.463544    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.463553    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.463558    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.463793    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:54.464086    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:54.464092    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.464098    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.464103    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.466055    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:54.466064    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.466069    9122 round_trippers.go:580]     Audit-Id: 2f518151-1bb6-42e0-b71e-97ee41a0aaca
	I1025 20:43:54.466074    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.466079    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.466084    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.466089    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.466096    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.466303    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:54.961378    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:54.961394    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.961403    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.961410    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.964219    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:54.964230    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.964236    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.964245    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.964250    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.964258    9122 round_trippers.go:580]     Audit-Id: f0ceec54-c4ca-46fb-8618-70b87a51e52c
	I1025 20:43:54.964263    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.964267    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.964324    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:54.964612    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:54.964619    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.964624    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.964629    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.966386    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:54.966395    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.966400    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.966405    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.966410    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.966415    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.966419    9122 round_trippers.go:580]     Audit-Id: ede5a4a8-73fd-4d37-8afb-bc40e3d9dcfe
	I1025 20:43:54.966424    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.966465    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:55.461365    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:55.461379    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.461389    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.461395    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.463717    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:55.463726    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.463731    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.463736    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.463740    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.463745    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.463750    9122 round_trippers.go:580]     Audit-Id: 35a37718-c105-4cb2-aa8f-a8f9721d26d5
	I1025 20:43:55.463757    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.463814    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:55.464095    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:55.464101    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.464107    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.464112    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.466041    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:55.466052    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.466066    9122 round_trippers.go:580]     Audit-Id: 9fb7cdbb-3f15-46d4-b54c-befdda4b3b6f
	I1025 20:43:55.466077    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.466083    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.466088    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.466093    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.466101    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.466145    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:55.466334    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
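Every GET and response above is traced by client-go's debug round-tripper (round_trippers.go), which wraps the HTTP transport and prints the verb and URL, then the response status with latency, then the response headers. A minimal stand-in using only net/http (the type name is hypothetical, not client-go's):

package podready

import (
	"log"
	"net/http"
	"time"
)

// loggingRoundTripper mimics the shape of the round_trippers.go output:
// request line first, then status plus latency, then response headers.
type loggingRoundTripper struct{ next http.RoundTripper }

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	for name, values := range resp.Header {
		log.Printf("    %s: %v", name, values)
	}
	return resp, nil
}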
	I1025 20:43:55.961336    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:55.961360    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.961372    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.961384    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.965044    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:55.965071    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.965084    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.965095    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.965102    9122 round_trippers.go:580]     Audit-Id: 87b19661-d1c5-480b-b477-f744f86d0038
	I1025 20:43:55.965145    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.965162    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.965169    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.965491    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:55.965784    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:55.965790    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.965796    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.965801    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.967775    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:55.967784    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.967789    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.967794    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.967799    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.967804    9122 round_trippers.go:580]     Audit-Id: c498106a-b32b-4e93-a5ad-f5b83d705344
	I1025 20:43:55.967808    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.967812    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.967955    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:56.461835    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:56.461849    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.461858    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.461864    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.464548    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:56.464557    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.464563    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.464568    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.464573    9122 round_trippers.go:580]     Audit-Id: 964a3895-50e7-4eff-b652-d06f37a9ce6c
	I1025 20:43:56.464578    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.464582    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.464587    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.464656    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:56.464935    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:56.464941    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.464946    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.464952    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.466842    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:56.466861    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.466869    9122 round_trippers.go:580]     Audit-Id: 029e1131-2571-4ed8-b933-39c6cbebeae5
	I1025 20:43:56.466881    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.466887    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.466893    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.466901    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.466907    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.466955    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:56.962237    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:56.962252    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.962261    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.962268    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.964923    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:56.964933    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.964939    9122 round_trippers.go:580]     Audit-Id: 51c35e46-86d7-455e-b050-2f053483af87
	I1025 20:43:56.964943    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.964948    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.964953    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.964958    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.964962    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.965018    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:56.965316    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:56.965322    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.965328    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.965347    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.967413    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:56.967422    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.967428    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.967436    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.967462    9122 round_trippers.go:580]     Audit-Id: 1689ee5e-f470-4fdf-9fc4-1dc1c286ddd1
	I1025 20:43:56.967473    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.967478    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.967483    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.967561    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:57.463260    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:57.463281    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.463293    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.463303    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.466861    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:57.466883    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.466892    9122 round_trippers.go:580]     Audit-Id: b85859cb-e30d-4ea7-a0fc-eb2d3d867a19
	I1025 20:43:57.466900    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.466906    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.466913    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.466919    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.466926    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.467027    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:57.467414    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:57.467420    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.467426    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.467431    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.469746    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:57.469756    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.469761    9122 round_trippers.go:580]     Audit-Id: e8ffdc76-b96d-4721-9503-c93fc4b988f0
	I1025 20:43:57.469766    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.469771    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.469776    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.469781    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.469786    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.469945    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:57.470129    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
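Note that each pod fetch above is immediately followed by a GET of the node object itself. A plausible reason, stated here as an assumption rather than anything the log confirms, is that the waiter also verifies the node is still Ready while it polls, along these lines:

package podready

import corev1 "k8s.io/api/core/v1"

// nodeReady reports whether the fetched Node carries a NodeReady
// condition with status True. Purely illustrative.
func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}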
	I1025 20:43:57.961601    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:57.961612    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.961619    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.961624    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.964142    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:57.964152    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.964160    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.964165    9122 round_trippers.go:580]     Audit-Id: b9ab7209-0573-4b04-abab-e473d11b4cf1
	I1025 20:43:57.964181    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.964190    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.964195    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.964217    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.964288    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:57.964583    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:57.964590    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.964596    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.964603    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.966543    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:57.966552    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.966557    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.966562    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.966567    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.966571    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.966576    9122 round_trippers.go:580]     Audit-Id: 9829fced-039d-43c2-b2ea-ab699f805e0e
	I1025 20:43:57.966581    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.966619    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:58.461462    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:58.461482    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.461494    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.461504    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.464821    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:58.464834    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.464846    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.464852    9122 round_trippers.go:580]     Audit-Id: 249f9fc7-a9e3-40c0-84c6-6dc2a96b282c
	I1025 20:43:58.464856    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.464861    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.464866    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.464873    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.464928    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:58.465214    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:58.465221    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.465231    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.465241    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.467149    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:58.467157    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.467162    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.467167    9122 round_trippers.go:580]     Audit-Id: aad96f15-1792-4205-ba82-af459c86c8fe
	I1025 20:43:58.467172    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.467178    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.467186    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.467191    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.467236    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:58.961305    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:58.961317    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.961324    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.961329    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.964070    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:58.964080    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.964085    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.964089    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.964094    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.964098    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.964103    9122 round_trippers.go:580]     Audit-Id: 336f7c59-7d06-4863-98a4-f811b2e8df4c
	I1025 20:43:58.964108    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.964191    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:58.964467    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:58.964473    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.964479    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.964485    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.966449    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:58.966457    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.966463    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.966469    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.966473    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.966478    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.966483    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.966488    9122 round_trippers.go:580]     Audit-Id: 8ed1dc65-c0b2-4440-b163-34412c6e174b
	I1025 20:43:58.966526    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:59.462970    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:59.462991    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.463003    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.463014    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.466901    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:59.466916    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.466924    9122 round_trippers.go:580]     Audit-Id: 66b08f05-35e9-48ae-9313-bb27d4e2ad23
	I1025 20:43:59.466930    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.466936    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.466943    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.466949    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.466956    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.467052    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:59.467429    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:59.467437    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.467445    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.467462    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.469701    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:59.469711    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.469716    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.469720    9122 round_trippers.go:580]     Audit-Id: b1eeaf14-ed27-44ca-a8cf-316c24d04607
	I1025 20:43:59.469727    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.469732    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.469738    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.469742    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.469955    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:59.470139    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
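	[editor's note] The repeating GET/sleep cycle logged above is a readiness poll: the client fetches the kube-controller-manager pod (and its node) roughly every 500 ms until the pod's Ready condition turns True. The following is a minimal, illustrative client-go sketch of that pattern only; the kubeconfig path, the 5-minute timeout, and the helper names are assumptions for the example and are not taken from minikube's pod_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig location; the run above talks to https://127.0.0.1:51345.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		for ctx.Err() == nil {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
				"kube-controller-manager-multinode-203818", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			// Matches the ~500 ms retry cadence visible in the log timestamps.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod readiness")
	}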
	I1025 20:43:59.961810    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:59.961835    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.961847    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.961856    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.965965    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:43:59.965979    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.965987    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.966005    9122 round_trippers.go:580]     Audit-Id: 54207af3-5f85-4ce7-964d-a9c85e5a970e
	I1025 20:43:59.966017    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.966024    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.966037    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.966047    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.966128    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:59.966412    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:59.966421    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.966427    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.966432    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.968741    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:59.968754    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.968759    9122 round_trippers.go:580]     Audit-Id: 7db5888e-9633-4963-94fd-197911bb08a7
	I1025 20:43:59.968768    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.968774    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.968778    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.968783    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.968787    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.968841    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:00.461285    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:00.461299    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.461307    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.461314    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.464136    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:00.464147    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.464153    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.464158    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.464163    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.464169    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.464174    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.464179    9122 round_trippers.go:580]     Audit-Id: 991b99c8-4fe3-45ad-b46a-2a2389711711
	I1025 20:44:00.464232    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:00.464513    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:00.464519    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.464527    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.464532    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.466385    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:00.466394    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.466399    9122 round_trippers.go:580]     Audit-Id: a15a2656-0fd2-45d9-bdff-abbae2596997
	I1025 20:44:00.466405    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.466409    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.466414    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.466419    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.466424    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.466465    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:00.961580    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:00.961604    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.961616    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.961626    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.965765    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:00.965780    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.965790    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.965805    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.965822    9122 round_trippers.go:580]     Audit-Id: 4b9d442f-8fcb-458a-90ab-3b21d2f26e7c
	I1025 20:44:00.965835    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.965844    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.965855    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.965968    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:00.966370    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:00.966377    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.966383    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.966388    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.968187    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:00.968196    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.968202    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.968208    9122 round_trippers.go:580]     Audit-Id: bcf49b7b-a200-4cc0-aa3e-253d15d5b05e
	I1025 20:44:00.968212    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.968217    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.968222    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.968227    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.968273    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:01.461333    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:01.461352    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.461363    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.461373    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.464924    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:01.464936    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.464942    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.464946    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.464950    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.464956    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.464962    9122 round_trippers.go:580]     Audit-Id: a23e61bf-936f-484e-96ff-9428299b51b3
	I1025 20:44:01.464967    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.465027    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:01.465313    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:01.465319    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.465324    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.465330    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.467145    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:01.467155    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.467160    9122 round_trippers.go:580]     Audit-Id: c8039d72-4573-4ac7-8e7d-0f64758c8379
	I1025 20:44:01.467166    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.467171    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.467177    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.467182    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.467186    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.467227    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:01.963442    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:01.963463    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.963476    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.963486    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.967329    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:01.967345    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.967354    9122 round_trippers.go:580]     Audit-Id: 13167eca-388f-4409-9d7e-42e52b15e595
	I1025 20:44:01.967363    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.967374    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.967380    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.967387    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.967396    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.967502    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:01.967849    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:01.967855    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.967861    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.967866    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.969482    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:01.969494    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.969505    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.969510    9122 round_trippers.go:580]     Audit-Id: 6d302772-7b14-4ad1-8f78-94e5218e789a
	I1025 20:44:01.969515    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.969520    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.969525    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.969529    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.969572    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:01.969758    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:44:02.461937    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:02.461957    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.461970    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.461980    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.465874    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:02.465889    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.465898    9122 round_trippers.go:580]     Audit-Id: 86b59cd3-9b42-402c-8a5b-ece04ddf7125
	I1025 20:44:02.465904    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.465911    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.465917    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.465924    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.465930    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.466006    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:02.466371    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:02.466382    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.466390    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.466397    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.468592    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:02.468601    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.468607    9122 round_trippers.go:580]     Audit-Id: fc29c905-e689-4edf-8bf7-3f2c81ffc63b
	I1025 20:44:02.468612    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.468622    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.468627    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.468631    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.468636    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.468679    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:02.962196    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:02.962219    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.962231    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.962241    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.965788    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:02.965837    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.965858    9122 round_trippers.go:580]     Audit-Id: 006f8565-7ed0-486e-9ed7-1fd5455319c1
	I1025 20:44:02.965873    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.965881    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.965886    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.965890    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.965895    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.965953    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:02.966229    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:02.966236    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.966242    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.966247    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.967937    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:02.967946    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.967952    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.967957    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.967961    9122 round_trippers.go:580]     Audit-Id: 4face073-3799-4d8c-9a0b-6fabe078c341
	I1025 20:44:02.967966    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.967971    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.967975    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.968013    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:03.462838    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:03.462857    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.462869    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.462878    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.466393    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:03.466407    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.466415    9122 round_trippers.go:580]     Audit-Id: fde3f4f0-6128-472d-a448-fc0221434fe2
	I1025 20:44:03.466421    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.466428    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.466434    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.466441    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.466447    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.466524    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:03.466887    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:03.466895    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.466903    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.466910    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.468927    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:03.468937    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.468942    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.468947    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.468952    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.468956    9122 round_trippers.go:580]     Audit-Id: 25632417-1aef-41a0-8d2c-6ada72d5c0b9
	I1025 20:44:03.468961    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.468966    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.469316    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:03.963038    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:03.963059    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.963073    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.963083    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.966750    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:03.966761    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.966767    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.966773    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.966780    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.966786    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.966793    9122 round_trippers.go:580]     Audit-Id: cbd132f3-e41f-4599-ab6d-735d939b1f7b
	I1025 20:44:03.966801    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.966925    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:03.967208    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:03.967214    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.967220    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.967226    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.968942    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:03.968952    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.968958    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.968965    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.968971    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.968975    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.968980    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.968984    9122 round_trippers.go:580]     Audit-Id: 900bf0a6-9fd6-42ad-8b1f-6e43686107e5
	I1025 20:44:03.969324    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:04.461415    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:04.461437    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.461449    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.461460    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.465110    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:04.465125    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.465132    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.465138    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.465145    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.465151    9122 round_trippers.go:580]     Audit-Id: d3788f9d-8d66-4c07-865d-de92c8b3635f
	I1025 20:44:04.465158    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.465164    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.465253    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:04.465638    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:04.465644    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.465650    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.465655    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.467348    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:04.467356    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.467362    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.467366    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.467371    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.467376    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.467380    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.467385    9122 round_trippers.go:580]     Audit-Id: ae9eba9c-20c2-4171-a1be-2e2d6e8e5d72
	I1025 20:44:04.467751    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:04.467950    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:44:04.963282    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:04.963302    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.963315    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.963325    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.967113    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:04.967129    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.967136    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.967142    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.967150    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.967156    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.967162    9122 round_trippers.go:580]     Audit-Id: d008f2d1-7681-4294-a001-aa8c186d4667
	I1025 20:44:04.967168    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.967235    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:04.967608    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:04.967616    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.967624    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.967631    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.969472    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:04.969482    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.969487    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.969493    9122 round_trippers.go:580]     Audit-Id: 703866cb-a18e-4a21-903c-d55f5944f9a0
	I1025 20:44:04.969497    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.969502    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.969506    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.969512    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.969692    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:05.461384    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:05.461404    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.461416    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.461426    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.465122    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:05.465135    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.465142    9122 round_trippers.go:580]     Audit-Id: 8955308e-66f0-4870-b6c5-5973365cf0c7
	I1025 20:44:05.465148    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.465155    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.465164    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.465171    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.465178    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.465316    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:05.465586    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:05.465592    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.465598    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.465603    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.467497    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:05.467507    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.467512    9122 round_trippers.go:580]     Audit-Id: 01184904-aba8-45fa-88f9-442e36d07d09
	I1025 20:44:05.467517    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.467522    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.467527    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.467531    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.467535    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.467573    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:05.961745    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:05.961769    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.961781    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.961792    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.966293    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:05.966308    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.966316    9122 round_trippers.go:580]     Audit-Id: 206f4229-6b93-4a29-b883-577cca3247f0
	I1025 20:44:05.966324    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.966332    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.966338    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.966345    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.966351    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.966456    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:05.966758    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:05.966765    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.966771    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.966776    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.968770    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:05.968777    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.968782    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.968787    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.968792    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.968797    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.968801    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.968806    9122 round_trippers.go:580]     Audit-Id: 1e4bdbf6-359e-4202-bc6d-834ac2a87612
	I1025 20:44:05.968841    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.463166    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:06.463187    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.463200    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.463210    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.467743    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:06.467761    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.467770    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.467779    9122 round_trippers.go:580]     Audit-Id: 3a140dfc-bcd1-410d-a4be-bb895b7b9a1d
	I1025 20:44:06.467787    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.467802    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.467828    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.467834    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.467909    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:06.468199    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.468206    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.468212    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.468217    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.470127    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.470139    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.470145    9122 round_trippers.go:580]     Audit-Id: cfd0edee-6370-41cb-abd4-deff4cced744
	I1025 20:44:06.470149    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.470154    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.470161    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.470165    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.470170    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.470311    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.470497    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:44:06.961965    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:06.961985    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.961997    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.962006    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.965901    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:06.965916    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.965924    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.965930    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.965936    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.965943    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.965949    9122 round_trippers.go:580]     Audit-Id: c786cc2d-70d0-40cf-87fd-35437d0a5d15
	I1025 20:44:06.965956    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.966065    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"1060","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8003 chars]
	I1025 20:44:06.966454    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.966478    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.966484    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.966489    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.968277    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.968286    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.968292    9122 round_trippers.go:580]     Audit-Id: ea145754-1556-408b-93cf-533da6a0c5bc
	I1025 20:44:06.968299    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.968307    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.968314    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.968318    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.968338    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.968573    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.968770    9122 pod_ready.go:92] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.968782    9122 pod_ready.go:81] duration metric: took 13.572479873s waiting for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.968795    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.968827    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-48p2l
	I1025 20:44:06.968831    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.968837    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.968842    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.970764    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.970773    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.970779    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.970784    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.970789    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.970793    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.970799    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.970804    9122 round_trippers.go:580]     Audit-Id: 193a8eff-4bfe-4043-b64e-a3a6419ef31f
	I1025 20:44:06.970849    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-48p2l","generateName":"kube-proxy-","namespace":"kube-system","uid":"cf96a572-bbca-4af2-bd3e-7d377772cef4","resourceVersion":"1004","creationTimestamp":"2022-10-26T03:38:58Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1025 20:44:06.971079    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.971084    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.971090    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.971095    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.972844    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.972854    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.972861    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.972866    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.972871    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.972876    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.972880    9122 round_trippers.go:580]     Audit-Id: 6ce688f2-525e-4ef0-a5d9-770b52d70799
	I1025 20:44:06.972886    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.972937    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.973109    9122 pod_ready.go:92] pod "kube-proxy-48p2l" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.973116    9122 pod_ready.go:81] duration metric: took 4.315082ms waiting for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.973121    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.973149    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-9j45q
	I1025 20:44:06.973153    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.973158    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.973164    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.974917    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.974926    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.974931    9122 round_trippers.go:580]     Audit-Id: 11675016-fe8d-4bbf-8c1a-34a9a05cadef
	I1025 20:44:06.974936    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.974941    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.974946    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.974951    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.974955    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.975021    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9j45q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3494f97-7b4b-4072-83ad-9a8308ed6c9b","resourceVersion":"922","creationTimestamp":"2022-10-26T03:40:04Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1025 20:44:06.975259    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m03
	I1025 20:44:06.975265    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.975271    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.975276    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.976674    9122 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1025 20:44:06.976682    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.976687    9122 round_trippers.go:580]     Audit-Id: 4833800c-c6d2-4b0a-a530-2a6ceeffee0b
	I1025 20:44:06.976692    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.976697    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.976702    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.976706    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.976711    9122 round_trippers.go:580]     Content-Length: 210
	I1025 20:44:06.976715    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.976724    9122 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-203818-m03\" not found","reason":"NotFound","details":{"name":"multinode-203818-m03","kind":"nodes"},"code":404}
	I1025 20:44:06.976819    9122 pod_ready.go:97] node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
	I1025 20:44:06.976826    9122 pod_ready.go:81] duration metric: took 3.700964ms waiting for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	E1025 20:44:06.976831    9122 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
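The 404 above comes back as a structured Status object ({"kind":"Status","reason":"NotFound"}); client-go callers normally detect this case with apierrors.IsNotFound rather than parsing the message text. A minimal sketch of that check, assuming an already-built *kubernetes.Clientset (the helper name is illustrative, not minikube code):

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeExists distinguishes a genuine NotFound (the missing
    // multinode-203818-m03 node above) from other API errors.
    func nodeExists(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        _, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }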
	I1025 20:44:06.976836    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.976859    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-j799s
	I1025 20:44:06.976863    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.976868    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.976873    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.978358    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.978366    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.978371    9122 round_trippers.go:580]     Audit-Id: 4a677621-bea5-4950-a02b-1d57cf293fdf
	I1025 20:44:06.978375    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.978381    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.978385    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.978391    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.978395    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.978437    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j799s","generateName":"kube-proxy-","namespace":"kube-system","uid":"281b0817-ab50-4c73-b20e-0774fcc2f594","resourceVersion":"840","creationTimestamp":"2022-10-26T03:39:21Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:39:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1025 20:44:06.978664    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:06.978670    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.978676    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.978681    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.980460    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.980469    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.980475    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.980484    9122 round_trippers.go:580]     Audit-Id: efe4cdac-9115-4574-a5a1-499a843dfa63
	I1025 20:44:06.980489    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.980493    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.980499    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.980504    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.980541    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818-m02","uid":"7c7037c9-edec-40ae-94ec-6fc8e2997faa","resourceVersion":"854","creationTimestamp":"2022-10-26T03:42:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4537 chars]
	I1025 20:44:06.980705    9122 pod_ready.go:92] pod "kube-proxy-j799s" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.980711    9122 pod_ready.go:81] duration metric: took 3.871169ms waiting for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.980717    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.980764    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-203818
	I1025 20:44:06.980768    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.980773    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.980779    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.982773    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.982782    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.982788    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.982793    9122 round_trippers.go:580]     Audit-Id: 33cd74c5-82df-42cc-b7bf-de8de4cd7bbb
	I1025 20:44:06.982798    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.982803    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.982807    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.982812    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.982863    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-203818","namespace":"kube-system","uid":"352db6de-72fe-4aaa-b7b7-79881ea11d8e","resourceVersion":"1029","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"134975eec8557874af571021bafa86c4","kubernetes.io/config.mirror":"134975eec8557874af571021bafa86c4","kubernetes.io/config.seen":"2022-10-26T03:38:46.168181423Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1025 20:44:06.983059    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.983065    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.983071    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.983077    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.984731    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.984740    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.984745    9122 round_trippers.go:580]     Audit-Id: 8e11e625-e7de-42b0-9b3f-7810ffc92c85
	I1025 20:44:06.984750    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.984755    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.984759    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.984765    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.984769    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.984812    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.984998    9122 pod_ready.go:92] pod "kube-scheduler-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.985003    9122 pod_ready.go:81] duration metric: took 4.282774ms waiting for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.985009    9122 pod_ready.go:38] duration metric: took 13.613826595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
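The pod_ready loop traced above issues a GET roughly every 500ms and inspects the pod's Ready condition until it reports True. A minimal client-go sketch of the same kind of readiness poll, with illustrative names and timing (not minikube's own helper):

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True or the
    // timeout elapses, matching the "Ready":"True" log lines above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
            }
            time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
        }
    }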
	I1025 20:44:06.985018    9122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 20:44:06.992881    9122 command_runner.go:130] > -16
	I1025 20:44:06.993003    9122 ops.go:34] apiserver oom_adj: -16
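The oom_adj probe above runs "cat /proc/$(pgrep kube-apiserver)/oom_adj" over SSH; a negative score such as -16 makes the kernel less inclined to OOM-kill the apiserver. Reproducing the same probe from Go on a Linux host with pgrep available (a sketch, not the ssh_runner implementation):

    import (
        "os/exec"
        "strconv"
        "strings"
    )

    // apiServerOOMAdj runs the same shell pipeline as the ssh_runner line
    // above and parses the resulting score (e.g. -16).
    func apiServerOOMAdj() (int, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(out)))
    }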
	I1025 20:44:06.993010    9122 kubeadm.go:631] restartCluster took 25.079628184s
	I1025 20:44:06.993017    9122 kubeadm.go:398] StartCluster complete in 25.108328524s
	I1025 20:44:06.993029    9122 settings.go:142] acquiring lock: {Name:mk8a865dc85ed559178cd0a5f8f4fdd48ae81a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:44:06.993100    9122 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:06.993487    9122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/kubeconfig: {Name:mke147bd0f9c02680989e4cfb1c572f71a0430b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:44:06.993952    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:06.994118    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
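The kapi.go dump above is the rest.Config minikube builds from the freshly rewritten kubeconfig (note the client certificate, key, and CA paths under the integration home). An equivalent config can be produced with client-go's clientcmd loader; a sketch, with the kubeconfig path treated as an assumption:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient loads a kubeconfig file and returns a typed clientset,
    // yielding the same kind of rest.Config logged (redacted) above.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        return kubernetes.NewForConfig(cfg)
    }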
	I1025 20:44:06.994332    9122 round_trippers.go:463] GET https://127.0.0.1:51345/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 20:44:06.994338    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.994343    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.994349    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.996382    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:06.996391    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.996396    9122 round_trippers.go:580]     Content-Length: 292
	I1025 20:44:06.996401    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.996406    9122 round_trippers.go:580]     Audit-Id: 5c46012d-92c6-4b2b-b5de-464b29383ac5
	I1025 20:44:06.996410    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.996415    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.996420    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.996424    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.996435    9122 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"47d25851-4c75-45e2-a9b2-efff685984f8","resourceVersion":"1045","creationTimestamp":"2022-10-26T03:38:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 20:44:06.996505    9122 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-203818" rescaled to 1
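The rescale above works through the deployment's autoscaling/v1 Scale subresource (the GET .../deployments/coredns/scale a few lines earlier). A read-modify-write sketch of the same operation with client-go; the namespace and name mirror the log, the helper itself is illustrative:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleDeployment sets a deployment's replica count through the
    // Scale subresource, as the coredns rescale above does.
    func rescaleDeployment(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired count, e.g. "replicas":1 above
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }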
	I1025 20:44:06.996536    9122 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 20:44:06.996547    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 20:44:06.996572    9122 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1025 20:44:06.996689    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:07.037601    9122 out.go:177] * Verifying Kubernetes components...
	I1025 20:44:07.037661    9122 addons.go:65] Setting storage-provisioner=true in profile "multinode-203818"
	I1025 20:44:07.037669    9122 addons.go:65] Setting default-storageclass=true in profile "multinode-203818"
	I1025 20:44:07.058856    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:44:07.058860    9122 addons.go:153] Setting addon storage-provisioner=true in "multinode-203818"
	I1025 20:44:07.055512    9122 command_runner.go:130] > apiVersion: v1
	W1025 20:44:07.058873    9122 addons.go:162] addon storage-provisioner should already be in state true
	I1025 20:44:07.058866    9122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-203818"
	I1025 20:44:07.058897    9122 command_runner.go:130] > data:
	I1025 20:44:07.058904    9122 command_runner.go:130] >   Corefile: |
	I1025 20:44:07.058907    9122 command_runner.go:130] >     .:53 {
	I1025 20:44:07.058912    9122 command_runner.go:130] >         errors
	I1025 20:44:07.058922    9122 command_runner.go:130] >         health {
	I1025 20:44:07.058934    9122 command_runner.go:130] >            lameduck 5s
	I1025 20:44:07.058937    9122 command_runner.go:130] >         }
	I1025 20:44:07.058941    9122 command_runner.go:130] >         ready
	I1025 20:44:07.058948    9122 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 20:44:07.058948    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:07.058952    9122 command_runner.go:130] >            pods insecure
	I1025 20:44:07.058963    9122 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 20:44:07.058972    9122 command_runner.go:130] >            ttl 30
	I1025 20:44:07.058977    9122 command_runner.go:130] >         }
	I1025 20:44:07.058984    9122 command_runner.go:130] >         prometheus :9153
	I1025 20:44:07.058989    9122 command_runner.go:130] >         hosts {
	I1025 20:44:07.058993    9122 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I1025 20:44:07.058998    9122 command_runner.go:130] >            fallthrough
	I1025 20:44:07.059001    9122 command_runner.go:130] >         }
	I1025 20:44:07.059006    9122 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 20:44:07.059010    9122 command_runner.go:130] >            max_concurrent 1000
	I1025 20:44:07.059013    9122 command_runner.go:130] >         }
	I1025 20:44:07.059024    9122 command_runner.go:130] >         cache 30
	I1025 20:44:07.059027    9122 command_runner.go:130] >         loop
	I1025 20:44:07.059036    9122 command_runner.go:130] >         reload
	I1025 20:44:07.059039    9122 command_runner.go:130] >         loadbalance
	I1025 20:44:07.059043    9122 command_runner.go:130] >     }
	I1025 20:44:07.059046    9122 command_runner.go:130] > kind: ConfigMap
	I1025 20:44:07.059049    9122 command_runner.go:130] > metadata:
	I1025 20:44:07.059053    9122 command_runner.go:130] >   creationTimestamp: "2022-10-26T03:38:46Z"
	I1025 20:44:07.059056    9122 command_runner.go:130] >   name: coredns
	I1025 20:44:07.059060    9122 command_runner.go:130] >   namespace: kube-system
	I1025 20:44:07.059064    9122 command_runner.go:130] >   resourceVersion: "373"
	I1025 20:44:07.059070    9122 command_runner.go:130] >   uid: 537c1da9-7d52-4ec3-a656-99d3d1685483
	I1025 20:44:07.059150    9122 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
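start.go:806 skips patching CoreDNS because the Corefile streamed above already carries the host.minikube.internal record in its hosts block. The same idempotency check expressed against the ConfigMap API (a sketch; assumes a configured clientset):

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasMinikubeHostRecord reports whether the kube-system/coredns
    // Corefile already contains the host.minikube.internal entry.
    func hasMinikubeHostRecord(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }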
	I1025 20:44:07.059196    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:44:07.059309    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:44:07.069777    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
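cli_runner resolves the host port Docker mapped to the container's 8443/tcp via the Go template in the command above; the result is the 127.0.0.1:51345 endpoint used throughout this log. The same lookup from Go, shelling out to the docker CLI (a sketch; assumes docker is on PATH, and omits the single quotes, which in the logged command are only shell quoting):

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor applies the inspect template from the log line above,
    // e.g. hostPortFor("multinode-203818", "8443/tcp").
    func hostPortFor(container, port string) (string, error) {
        format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }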
	I1025 20:44:07.130701    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:07.151236    9122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 20:44:07.151609    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 20:44:07.172099    9122 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 20:44:07.172122    9122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 20:44:07.172214    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:07.173038    9122 round_trippers.go:463] GET https://127.0.0.1:51345/apis/storage.k8s.io/v1/storageclasses
	I1025 20:44:07.173192    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.173249    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.173271    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.177352    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:07.177367    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.177373    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.177377    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.177396    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.177402    9122 round_trippers.go:580]     Content-Length: 1274
	I1025 20:44:07.177407    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.177412    9122 round_trippers.go:580]     Audit-Id: d89b0661-6f2c-45c2-9fc8-38cc7566449a
	I1025 20:44:07.177417    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.177474    9122 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"standard","uid":"3755b1f0-0744-497d-9808-f887a0391448","resourceVersion":"382","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
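enableOrDisableStorageClasses keys off the storageclass.kubernetes.io/is-default-class annotation visible in the list response above, then PUTs the updated object back (the request body appears a few lines below). Finding the current default class with client-go (a sketch; the helper is illustrative):

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultStorageClass returns the class annotated as cluster default,
    // e.g. "standard" in the StorageClassList above.
    func defaultStorageClass(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
        list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return "", err
        }
        for _, sc := range list.Items {
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                return sc.Name, nil
            }
        }
        return "", fmt.Errorf("no default storage class")
    }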
	I1025 20:44:07.179150    9122 node_ready.go:35] waiting up to 6m0s for node "multinode-203818" to be "Ready" ...
	I1025 20:44:07.179213    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:07.179218    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.179224    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.179232    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.179234    9122 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3755b1f0-0744-497d-9808-f887a0391448","resourceVersion":"382","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 20:44:07.179272    9122 round_trippers.go:463] PUT https://127.0.0.1:51345/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 20:44:07.179278    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.179284    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.179292    9122 round_trippers.go:473]     Content-Type: application/json
	I1025 20:44:07.179300    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.181672    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:07.181684    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.181689    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.181694    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.181698    9122 round_trippers.go:580]     Audit-Id: d224ba65-c794-4398-8ac3-629ac25a051d
	I1025 20:44:07.181704    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.181709    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.181714    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.181805    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:07.182026    9122 node_ready.go:49] node "multinode-203818" has status "Ready":"True"
	I1025 20:44:07.182033    9122 node_ready.go:38] duration metric: took 2.8645ms waiting for node "multinode-203818" to be "Ready" ...
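The node_ready step above is a poll: GET the node object (as in the request just logged) until its Ready condition reports True, up to the stated 6m0s budget. A minimal client-go sketch of such a loop; waitNodeReady is an illustrative name, not minikube's actual helper:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition is True.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not yet"
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Here the node was already Ready, so the wait returned after a single request (the 2.8645ms duration metric above).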
	I1025 20:44:07.182040    9122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 20:44:07.182753    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:07.182763    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.182769    9122 round_trippers.go:580]     Audit-Id: d0854b58-9b99-4067-8510-a900b6a8d0d0
	I1025 20:44:07.182773    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.182779    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.182783    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.182788    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.182793    9122 round_trippers.go:580]     Content-Length: 1220
	I1025 20:44:07.182798    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.182814    9122 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3755b1f0-0744-497d-9808-f887a0391448","resourceVersion":"382","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 20:44:07.182874    9122 addons.go:153] Setting addon default-storageclass=true in "multinode-203818"
	W1025 20:44:07.182882    9122 addons.go:162] addon default-storageclass should already be in state true
	I1025 20:44:07.182896    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:07.183190    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:44:07.240259    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:07.246626    9122 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 20:44:07.246637    9122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 20:44:07.246708    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:07.309890    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:07.333229    9122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
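The sshutil/"scp memory" lines above stream the storageclass manifest straight from memory to /etc/kubernetes/addons/ on the node before kubectl applies it. A rough stand-in for that idea using golang.org/x/crypto/ssh (not minikube's actual implementation; writeRemote is a hypothetical helper):

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote pipes in-memory bytes into `sudo tee` on the node so the
    // file lands at path without a local temp copy.
    func writeRemote(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }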
	I1025 20:44:07.362706    9122 request.go:614] Waited for 180.629775ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:07.362750    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:07.362755    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.362761    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.362768    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.366669    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:07.366682    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.366688    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.366692    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.366703    9122 round_trippers.go:580]     Audit-Id: 0e0a1e65-2e31-48d8-baf2-ed42a4bd3e8b
	I1025 20:44:07.366710    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.366714    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.366718    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.367948    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84959 chars]
	I1025 20:44:07.369937    9122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:07.400416    9122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 20:44:07.485959    9122 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1025 20:44:07.487687    9122 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1025 20:44:07.489063    9122 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 20:44:07.490820    9122 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 20:44:07.492465    9122 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1025 20:44:07.498462    9122 command_runner.go:130] > pod/storage-provisioner configured
	I1025 20:44:07.562086    9122 request.go:614] Waited for 192.095418ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/coredns-565d847f94-tvhv6
	I1025 20:44:07.562120    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/coredns-565d847f94-tvhv6
	I1025 20:44:07.562124    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.562130    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.562135    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.566435    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:07.566451    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.566456    9122 round_trippers.go:580]     Audit-Id: f140f3a5-df29-4e64-bd10-9e667a723d2b
	I1025 20:44:07.566462    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.566466    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.566471    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.566475    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.566480    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.566561    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6551 chars]
	I1025 20:44:07.620355    9122 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1025 20:44:07.648263    9122 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 20:44:07.691569    9122 addons.go:414] enableAddons completed in 694.998234ms
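The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (5 QPS, burst 10) spacing out the readiness GETs; they are not server-side APF rejections. A sketch of loosening that limiter when building the client, with kubeconfigPath as a placeholder:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default is 5 requests/second
        cfg.Burst = 100 // client-go default burst is 10
        return kubernetes.NewForConfig(cfg)
    }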
	I1025 20:44:07.762021    9122 request.go:614] Waited for 195.136053ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:07.762058    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:07.762065    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.762073    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.762084    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.764656    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:07.764669    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.764678    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.764686    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.764693    9122 round_trippers.go:580]     Audit-Id: e4c82b80-5d42-4fce-bd6a-3050cd08ff1c
	I1025 20:44:07.764700    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.764706    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.764718    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.764972    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:07.765194    9122 pod_ready.go:92] pod "coredns-565d847f94-tvhv6" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:07.765201    9122 pod_ready.go:81] duration metric: took 395.254043ms waiting for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
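Each pod_ready cycle below pairs a GET on the pod with a GET on the node hosting it: the pod counts only if its PodReady condition is True and the host node is itself Ready. The condition check is small; podReady is a hypothetical helper:

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }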
	I1025 20:44:07.765207    9122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:07.961986    9122 request.go:614] Waited for 196.741024ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/etcd-multinode-203818
	I1025 20:44:07.962138    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/etcd-multinode-203818
	I1025 20:44:07.962151    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.962163    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.962174    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.966307    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:07.966323    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.966330    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.966336    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.966345    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.966352    9122 round_trippers.go:580]     Audit-Id: 51b91cac-b510-47b9-88b4-ca1c46e71e93
	I1025 20:44:07.966361    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.966369    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.966453    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-203818","namespace":"kube-system","uid":"49b2d2ea-40ad-40fa-bab3-93930d3e9d10","resourceVersion":"1058","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.mirror":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.seen":"2022-10-26T03:38:46.168169599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6046 chars]
	I1025 20:44:08.162808    9122 request.go:614] Waited for 196.025571ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.162905    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.162921    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.162933    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.162944    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.166817    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.166835    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.166851    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.166859    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.166865    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.166871    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.166878    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.166884    9122 round_trippers.go:580]     Audit-Id: 6c70b420-3766-4887-98bd-ab76e8a7723c
	I1025 20:44:08.167478    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:08.167685    9122 pod_ready.go:92] pod "etcd-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:08.167692    9122 pod_ready.go:81] duration metric: took 402.479342ms waiting for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.167706    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.364018    9122 request.go:614] Waited for 196.222595ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-203818
	I1025 20:44:08.364077    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-203818
	I1025 20:44:08.364089    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.364107    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.364119    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.367813    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.367831    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.367841    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.367863    9122 round_trippers.go:580]     Audit-Id: 98b5633c-6893-45a7-9c41-806715c762aa
	I1025 20:44:08.367874    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.367880    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.367886    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.367912    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.368247    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-203818","namespace":"kube-system","uid":"e95d0701-3478-4373-8740-541b9481b83a","resourceVersion":"1056","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.mirror":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.seen":"2022-10-26T03:38:46.168180019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8428 chars]
	I1025 20:44:08.562157    9122 request.go:614] Waited for 193.608524ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.562299    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.562310    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.562321    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.562333    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.566288    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.566307    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.566316    9122 round_trippers.go:580]     Audit-Id: f3046232-4ada-4e5c-857d-bd735606aa80
	I1025 20:44:08.566323    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.566329    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.566335    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.566346    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.566354    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.566448    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:08.566752    9122 pod_ready.go:92] pod "kube-apiserver-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:08.566759    9122 pod_ready.go:81] duration metric: took 399.048922ms waiting for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.566765    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.762597    9122 request.go:614] Waited for 195.717109ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:08.762666    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:08.762677    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.762691    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.762702    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.767351    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:08.767366    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.767373    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.767380    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.767387    9122 round_trippers.go:580]     Audit-Id: afa51a08-8d07-4613-ae98-b0f2c5f7c4b0
	I1025 20:44:08.767394    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.767402    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.767406    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.767591    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"1060","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8003 chars]
	I1025 20:44:08.963074    9122 request.go:614] Waited for 195.066372ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.963127    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.963135    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.963146    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.963159    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.966919    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.966935    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.966943    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.966950    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.966958    9122 round_trippers.go:580]     Audit-Id: 2c709be6-e90f-4e55-8517-1d00601a24a4
	I1025 20:44:08.966965    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.966972    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.966979    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.967049    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:08.967332    9122 pod_ready.go:92] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:08.967339    9122 pod_ready.go:81] duration metric: took 400.568509ms waiting for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.967346    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.164022    9122 request.go:614] Waited for 196.623899ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-48p2l
	I1025 20:44:09.164137    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-48p2l
	I1025 20:44:09.164147    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.164159    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.164170    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.168358    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:09.168373    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.168380    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.168390    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.168397    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.168404    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.168410    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.168417    9122 round_trippers.go:580]     Audit-Id: 2624caff-b6fa-41b3-8bb1-cec9241d53a4
	I1025 20:44:09.168488    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-48p2l","generateName":"kube-proxy-","namespace":"kube-system","uid":"cf96a572-bbca-4af2-bd3e-7d377772cef4","resourceVersion":"1004","creationTimestamp":"2022-10-26T03:38:58Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1025 20:44:09.363840    9122 request.go:614] Waited for 195.001817ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:09.364013    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:09.364024    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.364035    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.364045    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.368021    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:09.368037    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.368045    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.368052    9122 round_trippers.go:580]     Audit-Id: 3cf4fa20-c988-4f3e-bb9a-5f5a6e5fb2b1
	I1025 20:44:09.368059    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.368070    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.368077    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.368083    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.368155    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:09.368414    9122 pod_ready.go:92] pod "kube-proxy-48p2l" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:09.368424    9122 pod_ready.go:81] duration metric: took 401.071604ms waiting for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.368432    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
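kube-proxy runs as a DaemonSet, one pod per node, which is why the wait visits kube-proxy-48p2l, kube-proxy-9j45q and kube-proxy-j799s in turn. The same set could be fetched in one labelled list (clientset and ctx as in the earlier sketches):

    pods, err := clientset.CoreV1().Pods("kube-system").List(ctx,
        metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})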
	I1025 20:44:09.562008    9122 request.go:614] Waited for 193.52219ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-9j45q
	I1025 20:44:09.562057    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-9j45q
	I1025 20:44:09.562065    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.562077    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.562091    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.566140    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:09.566169    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.566176    9122 round_trippers.go:580]     Audit-Id: 276eab66-ab1b-4b1e-98e7-1e40f2ac558d
	I1025 20:44:09.566187    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.566192    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.566197    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.566202    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.566211    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.566313    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9j45q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3494f97-7b4b-4072-83ad-9a8308ed6c9b","resourceVersion":"922","creationTimestamp":"2022-10-26T03:40:04Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1025 20:44:09.761966    9122 request.go:614] Waited for 195.374246ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m03
	I1025 20:44:09.762046    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m03
	I1025 20:44:09.762051    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.762059    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.762066    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.764502    9122 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1025 20:44:09.764512    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.764517    9122 round_trippers.go:580]     Audit-Id: 28eea3d3-7b04-4e39-b277-8ff1f2a410ab
	I1025 20:44:09.764522    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.764527    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.764533    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.764539    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.764544    9122 round_trippers.go:580]     Content-Length: 210
	I1025 20:44:09.764548    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.764564    9122 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-203818-m03\" not found","reason":"NotFound","details":{"name":"multinode-203818-m03","kind":"nodes"},"code":404}
	I1025 20:44:09.764627    9122 pod_ready.go:97] node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
	I1025 20:44:09.764636    9122 pod_ready.go:81] duration metric: took 396.199602ms waiting for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	E1025 20:44:09.764642    9122 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
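The 404 above is the notable branch: the pod kube-proxy-9j45q still exists, but its host node multinode-203818-m03 has been deleted, so the wait is skipped instead of failed. A sketch of that decision using apimachinery's error helpers; skipIfNodeGone is an illustrative name:

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // skipIfNodeGone reports whether the pod's host node no longer exists.
    func skipIfNodeGone(ctx context.Context, c kubernetes.Interface, nodeName string) (bool, error) {
        _, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // node deleted, as with -m03 here: skip the pod
        }
        return false, err
    }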
	I1025 20:44:09.764655    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.962736    9122 request.go:614] Waited for 197.986422ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-j799s
	I1025 20:44:09.962817    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-j799s
	I1025 20:44:09.962827    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.962841    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.962852    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.966803    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:09.966817    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.966824    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.966838    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.966845    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.966851    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.966863    9122 round_trippers.go:580]     Audit-Id: 3c39cbc8-d1c0-4cee-b472-86989131f6cd
	I1025 20:44:09.966870    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.966941    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j799s","generateName":"kube-proxy-","namespace":"kube-system","uid":"281b0817-ab50-4c73-b20e-0774fcc2f594","resourceVersion":"840","creationTimestamp":"2022-10-26T03:39:21Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:39:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1025 20:44:10.164023    9122 request.go:614] Waited for 196.719446ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:10.164102    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:10.164110    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.164134    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.164146    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.168057    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.168072    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.168080    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.168086    9122 round_trippers.go:580]     Audit-Id: 59d79d8a-cccf-49bd-a9f1-674d1f3bb491
	I1025 20:44:10.168093    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.168100    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.168106    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.168113    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.168181    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818-m02","uid":"7c7037c9-edec-40ae-94ec-6fc8e2997faa","resourceVersion":"854","creationTimestamp":"2022-10-26T03:42:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4537 chars]
	I1025 20:44:10.168449    9122 pod_ready.go:92] pod "kube-proxy-j799s" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:10.168456    9122 pod_ready.go:81] duration metric: took 403.796794ms waiting for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:10.168465    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:10.364040    9122 request.go:614] Waited for 195.517365ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-203818
	I1025 20:44:10.364151    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-203818
	I1025 20:44:10.364161    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.364172    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.364183    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.367594    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.367616    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.367624    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.367631    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.367637    9122 round_trippers.go:580]     Audit-Id: a79ccc07-97bc-4b16-bab4-0a2fd9f37651
	I1025 20:44:10.367646    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.367652    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.367658    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.367782    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-203818","namespace":"kube-system","uid":"352db6de-72fe-4aaa-b7b7-79881ea11d8e","resourceVersion":"1029","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"134975eec8557874af571021bafa86c4","kubernetes.io/config.mirror":"134975eec8557874af571021bafa86c4","kubernetes.io/config.seen":"2022-10-26T03:38:46.168181423Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1025 20:44:10.562867    9122 request.go:614] Waited for 194.761379ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:10.562952    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:10.562972    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.562991    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.563010    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.566796    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.566811    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.566819    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.566825    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.566832    9122 round_trippers.go:580]     Audit-Id: f762d591-1906-418c-b2ee-9039b32f2b79
	I1025 20:44:10.566843    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.566850    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.566856    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.566926    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:10.567181    9122 pod_ready.go:92] pod "kube-scheduler-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:10.567193    9122 pod_ready.go:81] duration metric: took 398.718424ms waiting for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:10.567219    9122 pod_ready.go:38] duration metric: took 3.385153731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 20:44:10.567234    9122 api_server.go:51] waiting for apiserver process to appear ...
	I1025 20:44:10.567279    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:44:10.576356    9122 command_runner.go:130] > 1821
	I1025 20:44:10.576881    9122 api_server.go:71] duration metric: took 3.580329189s to wait for apiserver process to appear ...
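The process check runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node (-x exact match, -n newest process, -f match against the full command line) and takes the printed PID (1821 above) as success. A local os/exec stand-in, assuming pgrep is on PATH:

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func apiserverPID() (string, error) {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", fmt.Errorf("apiserver process not found: %w", err)
        }
        return strings.TrimSpace(string(out)), nil // "1821" in this run
    }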
	I1025 20:44:10.576901    9122 api_server.go:87] waiting for apiserver healthz status ...
	I1025 20:44:10.576909    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:44:10.587904    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 200:
	ok
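The healthz probe is a plain HTTPS GET that treats a 200 with body "ok" as healthy. A bare-bones equivalent; the real client trusts the cluster CA, so the InsecureSkipVerify below is only to keep the sketch self-contained:

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error { // e.g. https://127.0.0.1:51345/healthz
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil // body is "ok" on a healthy apiserver
    }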
	I1025 20:44:10.587979    9122 round_trippers.go:463] GET https://127.0.0.1:51345/version
	I1025 20:44:10.587986    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.587995    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.588003    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.589294    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:10.589308    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.589320    9122 round_trippers.go:580]     Audit-Id: 52557332-ebc9-43b2-a25e-be48ba582fd6
	I1025 20:44:10.589328    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.589335    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.589342    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.589349    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.589355    9122 round_trippers.go:580]     Content-Length: 263
	I1025 20:44:10.589362    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.589385    9122 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 20:44:10.589428    9122 api_server.go:140] control plane version: v1.25.3
	I1025 20:44:10.589437    9122 api_server.go:130] duration metric: took 12.529993ms to wait for apiserver health ...
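The /version body above is apimachinery's version.Info, and the discovery client returns it directly, so the control-plane version line does not need a hand-rolled request (clientset as in the earlier sketch):

    // info.GitVersion is "v1.25.3" for this run.
    info, err := clientset.Discovery().ServerVersion()
    if err != nil {
        return err
    }
    fmt.Println("control plane version:", info.GitVersion)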
	I1025 20:44:10.589444    9122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 20:44:10.762106    9122 request.go:614] Waited for 172.611538ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
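The "Waited ... due to client-side throttling, not priority and fairness" messages here and below come from client-go's own token-bucket rate limiter, not from the server's API Priority and Fairness; the limiter is configured through the QPS and Burst fields on rest.Config. A sketch of where those knobs live (the kubeconfig path is hypothetical):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; minikube writes its own under the test home.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		// client-go defaults are QPS=5, Burst=10; once the token bucket is drained,
		// requests block and log the "Waited ... due to client-side throttling" line.
		cfg.QPS = 5
		cfg.Burst = 10

		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", clientset != nil)
	}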
	I1025 20:44:10.762239    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:10.762249    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.762263    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.762274    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.767195    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:10.767208    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.767214    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.767220    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.767226    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.767233    9122 round_trippers.go:580]     Audit-Id: 5462aff0-b123-4039-a710-894a41f9557d
	I1025 20:44:10.767240    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.767251    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.768603    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84959 chars]
	I1025 20:44:10.770508    9122 system_pods.go:59] 12 kube-system pods found
	I1025 20:44:10.770518    9122 system_pods.go:61] "coredns-565d847f94-tvhv6" [c89eabb7-66d0-469a-8966-ceeb6f9b215e] Running
	I1025 20:44:10.770522    9122 system_pods.go:61] "etcd-multinode-203818" [49b2d2ea-40ad-40fa-bab3-93930d3e9d10] Running
	I1025 20:44:10.770527    9122 system_pods.go:61] "kindnet-8xvrw" [a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07] Running
	I1025 20:44:10.770531    9122 system_pods.go:61] "kindnet-l9tx2" [0bc050f8-3916-4ad8-9eca-ec2de9c7c4d9] Running
	I1025 20:44:10.770534    9122 system_pods.go:61] "kindnet-q9qv5" [d5252527-eabb-4b78-9901-bfb15f51fc1b] Running
	I1025 20:44:10.770538    9122 system_pods.go:61] "kube-apiserver-multinode-203818" [e95d0701-3478-4373-8740-541b9481b83a] Running
	I1025 20:44:10.770542    9122 system_pods.go:61] "kube-controller-manager-multinode-203818" [cade2617-19dd-49f7-940e-d92e7b847fb0] Running
	I1025 20:44:10.770545    9122 system_pods.go:61] "kube-proxy-48p2l" [cf96a572-bbca-4af2-bd3e-7d377772cef4] Running
	I1025 20:44:10.770549    9122 system_pods.go:61] "kube-proxy-9j45q" [f3494f97-7b4b-4072-83ad-9a8308ed6c9b] Running
	I1025 20:44:10.770552    9122 system_pods.go:61] "kube-proxy-j799s" [281b0817-ab50-4c73-b20e-0774fcc2f594] Running
	I1025 20:44:10.770556    9122 system_pods.go:61] "kube-scheduler-multinode-203818" [352db6de-72fe-4aaa-b7b7-79881ea11d8e] Running
	I1025 20:44:10.770560    9122 system_pods.go:61] "storage-provisioner" [93c13130-1e73-4433-b82f-b565797df5c6] Running
	I1025 20:44:10.770564    9122 system_pods.go:74] duration metric: took 181.114965ms to wait for pod list to return data ...
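The 12-pod census above is a single PodList GET plus a per-pod phase check. Roughly equivalent client-go code (a sketch of the pattern, not minikube's system_pods.go, which additionally retries until everything is Running):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// The log prints name, UID and state per pod, same as here.
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
			if p.Status.Phase != corev1.PodRunning {
				fmt.Println("  -> not running yet, keep waiting")
			}
		}
	}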
	I1025 20:44:10.770569    9122 default_sa.go:34] waiting for default service account to be created ...
	I1025 20:44:10.962661    9122 request.go:614] Waited for 191.924378ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/default/serviceaccounts
	I1025 20:44:10.962711    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/default/serviceaccounts
	I1025 20:44:10.962723    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.962736    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.962748    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.966532    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.966547    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.966554    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.966560    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.966568    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.966575    9122 round_trippers.go:580]     Content-Length: 262
	I1025 20:44:10.966581    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.966588    9122 round_trippers.go:580]     Audit-Id: 10317f59-21a4-4558-8d84-d304f235334f
	I1025 20:44:10.966594    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.966611    9122 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"03440f91-d2ed-423b-adea-19369256c600","resourceVersion":"314","creationTimestamp":"2022-10-26T03:38:58Z"}}]}
	I1025 20:44:10.966769    9122 default_sa.go:45] found service account: "default"
	I1025 20:44:10.966778    9122 default_sa.go:55] duration metric: took 196.204791ms for default service account to be created ...
	I1025 20:44:10.966784    9122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 20:44:11.164179    9122 request.go:614] Waited for 197.237952ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:11.164246    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:11.164255    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:11.164272    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:11.164284    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:11.169255    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:11.169267    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:11.169272    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:11.169277    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:11 GMT
	I1025 20:44:11.169283    9122 round_trippers.go:580]     Audit-Id: 01e85a7b-96dc-41df-9787-3861240a51d5
	I1025 20:44:11.169289    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:11.169296    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:11.169301    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:11.170183    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1061"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84959 chars]
	I1025 20:44:11.172092    9122 system_pods.go:86] 12 kube-system pods found
	I1025 20:44:11.172102    9122 system_pods.go:89] "coredns-565d847f94-tvhv6" [c89eabb7-66d0-469a-8966-ceeb6f9b215e] Running
	I1025 20:44:11.172106    9122 system_pods.go:89] "etcd-multinode-203818" [49b2d2ea-40ad-40fa-bab3-93930d3e9d10] Running
	I1025 20:44:11.172110    9122 system_pods.go:89] "kindnet-8xvrw" [a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07] Running
	I1025 20:44:11.172115    9122 system_pods.go:89] "kindnet-l9tx2" [0bc050f8-3916-4ad8-9eca-ec2de9c7c4d9] Running
	I1025 20:44:11.172119    9122 system_pods.go:89] "kindnet-q9qv5" [d5252527-eabb-4b78-9901-bfb15f51fc1b] Running
	I1025 20:44:11.172122    9122 system_pods.go:89] "kube-apiserver-multinode-203818" [e95d0701-3478-4373-8740-541b9481b83a] Running
	I1025 20:44:11.172131    9122 system_pods.go:89] "kube-controller-manager-multinode-203818" [cade2617-19dd-49f7-940e-d92e7b847fb0] Running
	I1025 20:44:11.172135    9122 system_pods.go:89] "kube-proxy-48p2l" [cf96a572-bbca-4af2-bd3e-7d377772cef4] Running
	I1025 20:44:11.172138    9122 system_pods.go:89] "kube-proxy-9j45q" [f3494f97-7b4b-4072-83ad-9a8308ed6c9b] Running
	I1025 20:44:11.172144    9122 system_pods.go:89] "kube-proxy-j799s" [281b0817-ab50-4c73-b20e-0774fcc2f594] Running
	I1025 20:44:11.172148    9122 system_pods.go:89] "kube-scheduler-multinode-203818" [352db6de-72fe-4aaa-b7b7-79881ea11d8e] Running
	I1025 20:44:11.172151    9122 system_pods.go:89] "storage-provisioner" [93c13130-1e73-4433-b82f-b565797df5c6] Running
	I1025 20:44:11.172155    9122 system_pods.go:126] duration metric: took 205.367711ms to wait for k8s-apps to be running ...
	I1025 20:44:11.172160    9122 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 20:44:11.172222    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:44:11.181462    9122 system_svc.go:56] duration metric: took 9.295705ms WaitForService to wait for kubelet.
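"systemctl is-active --quiet" prints nothing and reports state purely through its exit code, which is all minikube inspects here. A local stand-in for the same check (minikube runs it over SSH through its ssh_runner; exec.Command is the local equivalent):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// With --quiet, systemctl is silent and signals state via exit code:
		// 0 = active, non-zero = inactive/failed/unknown.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}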
	I1025 20:44:11.181480    9122 kubeadm.go:573] duration metric: took 4.184928507s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 20:44:11.181496    9122 node_conditions.go:102] verifying NodePressure condition ...
	I1025 20:44:11.362010    9122 request.go:614] Waited for 180.448649ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes
	I1025 20:44:11.362038    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes
	I1025 20:44:11.362042    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:11.362048    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:11.362054    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:11.364620    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:11.364629    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:11.364634    9122 round_trippers.go:580]     Audit-Id: 10b9264a-9a8a-4598-9808-eb4cc28e5be9
	I1025 20:44:11.364639    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:11.364646    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:11.364653    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:11.364659    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:11.364664    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:11 GMT
	I1025 20:44:11.364750    9122 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1061"},"items":[{"metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 10905 chars]
	I1025 20:44:11.365082    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:44:11.365089    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:44:11.365097    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:44:11.365101    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:44:11.365105    9122 node_conditions.go:105] duration metric: took 183.6044ms to run NodePressure ...
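The repeated capacity lines come from Status.Capacity on each node in the NodeList (they print twice above, once per node, with identical figures). A client-go sketch of the same read-out:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			// Matches the "node storage ephemeral capacity is ..." and
			// "node cpu capacity is ..." lines in the log.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}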
	I1025 20:44:11.365111    9122 start.go:217] waiting for startup goroutines ...
	I1025 20:44:11.365800    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:11.365863    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:44:11.409835    9122 out.go:177] * Starting worker node multinode-203818-m02 in cluster multinode-203818
	I1025 20:44:11.431674    9122 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:44:11.453923    9122 out.go:177] * Pulling base image ...
	I1025 20:44:11.496714    9122 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:44:11.496753    9122 cache.go:57] Caching tarball of preloaded images
	I1025 20:44:11.496776    9122 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:44:11.496926    9122 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 20:44:11.496949    9122 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 20:44:11.497862    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:44:11.560883    9122 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:44:11.560896    9122 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:44:11.560905    9122 cache.go:208] Successfully downloaded all kic artifacts
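Skipping the pull works by first asking the local daemon whether the pinned kicbase image already exists. A sketch of the inspect-before-pull pattern with the Docker Go SDK (the tag is taken from the log above; minikube's image.go differs in detail):

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/docker/errdefs"
	)

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094"

		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Inspect instead of pull: if the image is already in the daemon, skip the download.
		if _, _, err := cli.ImageInspectWithRaw(context.Background(), ref); err != nil {
			if errdefs.IsNotFound(err) {
				fmt.Println("image not in daemon, would pull", ref)
				return
			}
			panic(err)
		}
		fmt.Println("found in local docker daemon, skipping pull")
	}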
	I1025 20:44:11.560992    9122 start.go:364] acquiring machines lock for multinode-203818-m02: {Name:mk1c2c2ef868528130aa99eb339d96e0521be812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:44:11.561056    9122 start.go:368] acquired machines lock for "multinode-203818-m02" in 52.175µs
	I1025 20:44:11.561070    9122 start.go:96] Skipping create...Using existing machine configuration
	I1025 20:44:11.561075    9122 fix.go:55] fixHost starting: m02
	I1025 20:44:11.561324    9122 cli_runner.go:164] Run: docker container inspect multinode-203818-m02 --format={{.State.Status}}
	I1025 20:44:11.625979    9122 fix.go:103] recreateIfNeeded on multinode-203818-m02: state=Stopped err=<nil>
	W1025 20:44:11.626000    9122 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 20:44:11.647764    9122 out.go:177] * Restarting existing docker container for "multinode-203818-m02" ...
	I1025 20:44:11.723412    9122 cli_runner.go:164] Run: docker start multinode-203818-m02
	I1025 20:44:12.059372    9122 cli_runner.go:164] Run: docker container inspect multinode-203818-m02 --format={{.State.Status}}
	I1025 20:44:12.124383    9122 kic.go:415] container "multinode-203818-m02" state is running.
	I1025 20:44:12.125031    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:44:12.194372    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:44:12.194849    9122 machine.go:88] provisioning docker machine ...
	I1025 20:44:12.194880    9122 ubuntu.go:169] provisioning hostname "multinode-203818-m02"
	I1025 20:44:12.195019    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:12.271024    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:12.271206    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:12.271217    9122 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-203818-m02 && echo "multinode-203818-m02" | sudo tee /etc/hostname
	I1025 20:44:12.422715    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-203818-m02
	
	I1025 20:44:12.422808    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:12.488163    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:12.488370    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:12.488386    9122 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-203818-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-203818-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-203818-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 20:44:12.608786    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
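Each "About to run SSH command" step goes through Go's native SSH client against the forwarded port, 127.0.0.1:51373 for this node. A minimal x/crypto/ssh sketch that runs the hostname command shown above (key path, port and user are copied from the log; host-key checking is skipped only because the target is a local test container):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Fine for a throwaway local container; real deployments should pin the host key.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		conn, err := ssh.Dial("tcp", "127.0.0.1:51373", cfg)
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		sess, err := conn.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// Same hostname command the provisioner runs above.
		out, err := sess.CombinedOutput(`sudo hostname multinode-203818-m02 && echo "multinode-203818-m02" | sudo tee /etc/hostname`)
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}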
	I1025 20:44:12.608802    9122 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/14956-2080/.minikube CaCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/14956-2080/.minikube}
	I1025 20:44:12.608812    9122 ubuntu.go:177] setting up certificates
	I1025 20:44:12.608818    9122 provision.go:83] configureAuth start
	I1025 20:44:12.608882    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:44:12.677679    9122 provision.go:138] copyHostCerts
	I1025 20:44:12.677722    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:44:12.677763    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem, removing ...
	I1025 20:44:12.677768    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:44:12.677863    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem (1078 bytes)
	I1025 20:44:12.678028    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:44:12.678053    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem, removing ...
	I1025 20:44:12.678080    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:44:12.678142    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem (1123 bytes)
	I1025 20:44:12.678261    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:44:12.678284    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem, removing ...
	I1025 20:44:12.678289    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:44:12.678345    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem (1679 bytes)
	I1025 20:44:12.678483    9122 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem org=jenkins.multinode-203818-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-203818-m02]
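The server certificate is signed by the shared minikube CA with the node's addresses and names baked in as SANs, matching the san=[...] list above. A condensed crypto/x509 sketch of minting such a cert (a self-signed stand-in CA and placeholder validity periods; minikube instead loads ca.pem/ca-key.pem from .minikube/certs, and errors are elided here for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; minikube reuses the existing minikubeCA key pair instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SAN set the log shows for the m02 node.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-203818-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-203818-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}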
	I1025 20:44:12.759757    9122 provision.go:172] copyRemoteCerts
	I1025 20:44:12.759832    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 20:44:12.759878    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:12.827796    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:12.915334    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 20:44:12.915430    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 20:44:12.933018    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 20:44:12.933078    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1025 20:44:12.949721    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 20:44:12.949801    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 20:44:12.966236    9122 provision.go:86] duration metric: configureAuth took 357.409581ms
	I1025 20:44:12.966251    9122 ubuntu.go:193] setting minikube options for container-runtime
	I1025 20:44:12.966418    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:12.966470    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.029690    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:13.029886    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:13.029896    9122 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 20:44:13.154210    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 20:44:13.154223    9122 ubuntu.go:71] root file system type: overlay
	I1025 20:44:13.154940    9122 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 20:44:13.155079    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.219797    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:13.219931    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:13.219978    9122 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 20:44:13.349991    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 20:44:13.350077    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.415362    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:13.415521    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:13.415535    9122 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 20:44:13.542653    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:44:13.542668    9122 machine.go:91] provisioned docker machine in 1.347809776s
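The diff-or-replace one-liner above is an idempotence guard: Docker is reloaded and restarted only when docker.service.new actually differs from the installed unit, so an unchanged config leaves the running daemon alone. The same logic expressed locally in Go (paths from the log; must run as root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(cur, next) {
			fmt.Println("unit unchanged, skipping docker restart")
			return
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("%v: %s", err, out))
			}
		}
		fmt.Println("docker unit updated and restarted")
	}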
	I1025 20:44:13.542674    9122 start.go:300] post-start starting for "multinode-203818-m02" (driver="docker")
	I1025 20:44:13.542679    9122 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 20:44:13.542734    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 20:44:13.542793    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.606825    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:13.693211    9122 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 20:44:13.696366    9122 command_runner.go:130] > NAME="Ubuntu"
	I1025 20:44:13.696383    9122 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1025 20:44:13.696387    9122 command_runner.go:130] > ID=ubuntu
	I1025 20:44:13.696392    9122 command_runner.go:130] > ID_LIKE=debian
	I1025 20:44:13.696396    9122 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1025 20:44:13.696400    9122 command_runner.go:130] > VERSION_ID="20.04"
	I1025 20:44:13.696406    9122 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 20:44:13.696410    9122 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 20:44:13.696414    9122 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 20:44:13.696423    9122 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 20:44:13.696429    9122 command_runner.go:130] > VERSION_CODENAME=focal
	I1025 20:44:13.696432    9122 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1025 20:44:13.696477    9122 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 20:44:13.696487    9122 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 20:44:13.696500    9122 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 20:44:13.696504    9122 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 20:44:13.696519    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/addons for local assets ...
	I1025 20:44:13.696622    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/files for local assets ...
	I1025 20:44:13.696768    9122 filesync.go:149] local asset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> 29162.pem in /etc/ssl/certs
	I1025 20:44:13.696775    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /etc/ssl/certs/29162.pem
	I1025 20:44:13.696903    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 20:44:13.703981    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:44:13.720777    9122 start.go:303] post-start completed in 178.09464ms
	I1025 20:44:13.720858    9122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:44:13.720908    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.784893    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:13.869271    9122 command_runner.go:130] > 6%
	I1025 20:44:13.869570    9122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:44:13.873397    9122 command_runner.go:130] > 92G
	I1025 20:44:13.873696    9122 fix.go:57] fixHost completed within 2.312617394s
	I1025 20:44:13.873705    9122 start.go:83] releasing machines lock for "multinode-203818-m02", held for 2.31264211s
	I1025 20:44:13.873768    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:44:13.957825    9122 out.go:177] * Found network options:
	I1025 20:44:13.979655    9122 out.go:177]   - NO_PROXY=192.168.58.2
	W1025 20:44:14.001593    9122 proxy.go:119] fail to check proxy env: Error ip not in block
	W1025 20:44:14.001645    9122 proxy.go:119] fail to check proxy env: Error ip not in block
	I1025 20:44:14.001845    9122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 20:44:14.001869    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 20:44:14.001967    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:14.001977    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:14.069310    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:14.069901    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:14.160131    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1025 20:44:14.173115    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:44:14.200970    9122 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 20:44:14.254199    9122 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 20:44:14.352590    9122 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 20:44:14.362985    9122 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 20:44:14.363525    9122 command_runner.go:130] > [Unit]
	I1025 20:44:14.363534    9122 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 20:44:14.363544    9122 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 20:44:14.363550    9122 command_runner.go:130] > BindsTo=containerd.service
	I1025 20:44:14.363555    9122 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 20:44:14.363559    9122 command_runner.go:130] > Wants=network-online.target
	I1025 20:44:14.363563    9122 command_runner.go:130] > Requires=docker.socket
	I1025 20:44:14.363568    9122 command_runner.go:130] > StartLimitBurst=3
	I1025 20:44:14.363571    9122 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 20:44:14.363574    9122 command_runner.go:130] > [Service]
	I1025 20:44:14.363578    9122 command_runner.go:130] > Type=notify
	I1025 20:44:14.363581    9122 command_runner.go:130] > Restart=on-failure
	I1025 20:44:14.363585    9122 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1025 20:44:14.363591    9122 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 20:44:14.363597    9122 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 20:44:14.363606    9122 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 20:44:14.363611    9122 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 20:44:14.363617    9122 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 20:44:14.363622    9122 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 20:44:14.363630    9122 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 20:44:14.363640    9122 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 20:44:14.363646    9122 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 20:44:14.363650    9122 command_runner.go:130] > ExecStart=
	I1025 20:44:14.363661    9122 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 20:44:14.363666    9122 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 20:44:14.363672    9122 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 20:44:14.363678    9122 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 20:44:14.363682    9122 command_runner.go:130] > LimitNOFILE=infinity
	I1025 20:44:14.363685    9122 command_runner.go:130] > LimitNPROC=infinity
	I1025 20:44:14.363689    9122 command_runner.go:130] > LimitCORE=infinity
	I1025 20:44:14.363694    9122 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 20:44:14.363699    9122 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 20:44:14.363702    9122 command_runner.go:130] > TasksMax=infinity
	I1025 20:44:14.363705    9122 command_runner.go:130] > TimeoutStartSec=0
	I1025 20:44:14.363711    9122 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 20:44:14.363714    9122 command_runner.go:130] > Delegate=yes
	I1025 20:44:14.363724    9122 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 20:44:14.363728    9122 command_runner.go:130] > KillMode=process
	I1025 20:44:14.363731    9122 command_runner.go:130] > [Install]
	I1025 20:44:14.363737    9122 command_runner.go:130] > WantedBy=multi-user.target
	I1025 20:44:14.363849    9122 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 20:44:14.363906    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 20:44:14.373079    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 20:44:14.384406    9122 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 20:44:14.384420    9122 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 20:44:14.385299    9122 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 20:44:14.449957    9122 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 20:44:14.515541    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:44:14.597736    9122 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 20:44:14.808775    9122 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 20:44:14.880558    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:44:14.950550    9122 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 20:44:14.959952    9122 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 20:44:14.960018    9122 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 20:44:14.964235    9122 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 20:44:14.964252    9122 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 20:44:14.964264    9122 command_runner.go:130] > Device: 100035h/1048629d	Inode: 130         Links: 1
	I1025 20:44:14.964273    9122 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 20:44:14.964284    9122 command_runner.go:130] > Access: 2022-10-26 03:44:14.911540563 +0000
	I1025 20:44:14.964291    9122 command_runner.go:130] > Modify: 2022-10-26 03:44:14.274540607 +0000
	I1025 20:44:14.964295    9122 command_runner.go:130] > Change: 2022-10-26 03:44:14.286540607 +0000
	I1025 20:44:14.964301    9122 command_runner.go:130] >  Birth: -
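"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a stat poll that succeeds once the path exists and is a unix socket, which is what the srw-rw---- mode in the stat output above confirms. A sketch of that wait loop:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/cri-dockerd.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			// Succeed as soon as the path exists and is a socket.
			if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
				fmt.Println("socket ready:", sock)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", sock)
	}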
	I1025 20:44:14.964373    9122 start.go:472] Will wait 60s for crictl version
	I1025 20:44:14.964418    9122 ssh_runner.go:195] Run: sudo crictl version
	I1025 20:44:14.992780    9122 command_runner.go:130] > Version:  0.1.0
	I1025 20:44:14.992792    9122 command_runner.go:130] > RuntimeName:  docker
	I1025 20:44:14.992806    9122 command_runner.go:130] > RuntimeVersion:  20.10.18
	I1025 20:44:14.992815    9122 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1025 20:44:14.994850    9122 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 20:44:14.994921    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:44:15.019910    9122 command_runner.go:130] > 20.10.18
	I1025 20:44:15.022225    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:44:15.046873    9122 command_runner.go:130] > 20.10.18
	I1025 20:44:15.074290    9122 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 20:44:15.115184    9122 out.go:177]   - env NO_PROXY=192.168.58.2
	I1025 20:44:15.136476    9122 cli_runner.go:164] Run: docker exec -t multinode-203818-m02 dig +short host.docker.internal
	I1025 20:44:15.255350    9122 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 20:44:15.255485    9122 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 20:44:15.259923    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
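The bash one-liner above pins host.minikube.internal in /etc/hosts idempotently: strip any previous entry for the name, append the fresh one, then copy the result back. The same edit in Go (a sketch; writing /etc/hosts naturally needs root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.65.2\thost.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale pin for the same name, exactly like the grep -v above.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		// Write in place; the shell version stages via /tmp/h.$$ and cp, which this
		// mirrors in effect (and also works on a bind-mounted /etc/hosts in a container).
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("pinned", entry)
	}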
	I1025 20:44:15.269340    9122 certs.go:54] Setting up /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818 for IP: 192.168.58.3
	I1025 20:44:15.269449    9122 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key
	I1025 20:44:15.269494    9122 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key
	I1025 20:44:15.269523    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 20:44:15.269550    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 20:44:15.269565    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 20:44:15.269581    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 20:44:15.269670    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem (1338 bytes)
	W1025 20:44:15.269707    9122 certs.go:384] ignoring /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916_empty.pem, impossibly tiny 0 bytes
	I1025 20:44:15.269731    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 20:44:15.269764    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem (1078 bytes)
	I1025 20:44:15.269794    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem (1123 bytes)
	I1025 20:44:15.269820    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem (1679 bytes)
	I1025 20:44:15.269881    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:44:15.269914    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.269932    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.269950    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem -> /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.270326    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 20:44:15.287336    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 20:44:15.303765    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 20:44:15.320192    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 20:44:15.336782    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /usr/share/ca-certificates/29162.pem (1708 bytes)
	I1025 20:44:15.353371    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 20:44:15.370054    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem --> /usr/share/ca-certificates/2916.pem (1338 bytes)
	I1025 20:44:15.386711    9122 ssh_runner.go:195] Run: openssl version
	I1025 20:44:15.391381    9122 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1025 20:44:15.391745    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 20:44:15.399021    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.402726    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.402951    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.402998    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.407823    9122 command_runner.go:130] > b5213941
	I1025 20:44:15.408133    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 20:44:15.415053    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2916.pem && ln -fs /usr/share/ca-certificates/2916.pem /etc/ssl/certs/2916.pem"
	I1025 20:44:15.423389    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.427055    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.427126    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.427163    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.431956    9122 command_runner.go:130] > 51391683
	I1025 20:44:15.432270    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2916.pem /etc/ssl/certs/51391683.0"
	I1025 20:44:15.439614    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29162.pem && ln -fs /usr/share/ca-certificates/29162.pem /etc/ssl/certs/29162.pem"
	I1025 20:44:15.447130    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.450687    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.450749    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.450794    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.455626    9122 command_runner.go:130] > 3ec20f2e
	I1025 20:44:15.455978    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29162.pem /etc/ssl/certs/3ec20f2e.0"
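The openssl x509 -hash / ln -fs pairs above build the hashed lookup links (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL expects under /etc/ssl/certs when verifying against a CA directory. A Go sketch that shells out for the subject hash and creates the link, mirroring those commands (needs root for /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// openssl prints the subject-name hash (e.g. b5213941) used for lookup links.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}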
	I1025 20:44:15.462987    9122 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 20:44:15.527725    9122 command_runner.go:130] > systemd
	I1025 20:44:15.529863    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:44:15.529874    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:44:15.529887    9122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 20:44:15.529902    9122 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-203818 NodeName:multinode-203818-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 20:44:15.529985    9122 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-203818-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
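kubeadm consumes this generated trio on the joining node (InitConfiguration/ClusterConfiguration for join context, plus KubeletConfiguration and KubeProxyConfiguration); the cluster-side source of truth is the kubeadm-config ConfigMap, which the preflight output further down in this log points at:

    kubectl -n kube-system get cm kubeadm-config -o yaml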
	I1025 20:44:15.530034    9122 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-203818-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 20:44:15.530092    9122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 20:44:15.536939    9122 command_runner.go:130] > kubeadm
	I1025 20:44:15.536947    9122 command_runner.go:130] > kubectl
	I1025 20:44:15.536951    9122 command_runner.go:130] > kubelet
	I1025 20:44:15.537650    9122 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 20:44:15.537697    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1025 20:44:15.544541    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1025 20:44:15.556554    9122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
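The two files copied above are the stock kubeadm systemd layout: the unit at /lib/systemd/system/kubelet.service plus the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf carrying the ExecStart flags shown above. A sketch of wiring them up by hand (the reload/restart steps happen outside this excerpt):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    # write kubelet.service and 10-kubeadm.conf with the contents shown above, then:
    sudo systemctl daemon-reload
    sudo systemctl enable --now kubelet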
	I1025 20:44:15.568700    9122 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 20:44:15.572493    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
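The hosts-file one-liner above is an idempotent update: drop any existing line for the name, append the current mapping, and cp the result back into place (cp rather than mv, because /etc/hosts inside a Docker container is a bind mount and must be overwritten, not replaced). Generalized:

    host=control-plane.minikube.internal ip=192.168.58.2
    { grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$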
	I1025 20:44:15.581924    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:15.582093    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:15.582090    9122 start.go:286] JoinCluster: &{Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:44:15.582174    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1025 20:44:15.582218    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:15.646255    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:15.774631    9122 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 
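The join is bootstrapped by minting a fresh join command on the control-plane node; --ttl=0 makes the bootstrap token non-expiring. Reproducing it by hand with minikube's bundled binaries:

    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
      kubeadm token create --print-join-command --ttl=0
    # -> kubeadm join control-plane.minikube.internal:8443 --token <token> \
    #    --discovery-token-ca-cert-hash sha256:<hash>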
	I1025 20:44:15.778620    9122 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:15.778648    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:15.778864    9122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-203818-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1025 20:44:15.778906    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:15.842793    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:15.985149    9122 command_runner.go:130] > node/multinode-203818-m02 cordoned
	I1025 20:44:19.003270    9122 command_runner.go:130] > pod "busybox-65db55d5d6-jf8jp" has DeletionTimestamp older than 1 seconds, skipping
	I1025 20:44:19.003290    9122 command_runner.go:130] > node/multinode-203818-m02 drained
	I1025 20:44:19.006692    9122 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1025 20:44:19.006713    9122 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-q9qv5, kube-system/kube-proxy-j799s
	I1025 20:44:19.006734    9122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-203818-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.227854199s)
	I1025 20:44:19.006743    9122 node.go:109] successfully drained node "m02"
	I1025 20:44:19.007037    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:19.007225    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 20:44:19.007480    9122 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1025 20:44:19.007506    9122 round_trippers.go:463] DELETE https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:19.007510    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:19.007517    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:19.007522    9122 round_trippers.go:473]     Content-Type: application/json
	I1025 20:44:19.007527    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:19.010665    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:19.010677    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:19.010683    9122 round_trippers.go:580]     Audit-Id: dc7ed9be-6012-45e8-a1d4-601bc5c4655d
	I1025 20:44:19.010688    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:19.010696    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:19.010701    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:19.010706    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:19.010712    9122 round_trippers.go:580]     Content-Length: 171
	I1025 20:44:19.010717    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:19 GMT
	I1025 20:44:19.010729    9122 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-203818-m02","kind":"nodes","uid":"7c7037c9-edec-40ae-94ec-6fc8e2997faa"}}
	I1025 20:44:19.010752    9122 node.go:125] successfully deleted node "m02"
	I1025 20:44:19.010758    9122 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
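The removal above is the standard two-step node teardown: drain (cordon, evict or delete pods, skip DaemonSet-managed ones), then delete the Node object; the DELETE against /api/v1/nodes/multinode-203818-m02 in the round-tripper trace is exactly what kubectl delete node issues. Note the drain call still passes the deprecated --delete-local-data alongside its replacement, hence the deprecation warning above. The kubectl equivalent:

    kubectl drain multinode-203818-m02 --force --ignore-daemonsets --delete-emptydir-data
    kubectl delete node multinode-203818-m02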
	I1025 20:44:19.010771    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:19.010781    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:44:19.046970    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:44:19.156652    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:44:19.156670    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 20:44:19.174304    9122 command_runner.go:130] ! W1026 03:44:19.053625    1094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:44:19.174317    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:44:19.174328    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:44:19.174334    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:44:19.174339    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:44:19.174347    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:44:19.174357    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:44:19.174364    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1025 20:44:19.174393    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:19.053625    1094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:19.174407    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:44:19.174416    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:44:19.210642    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:44:19.210658    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:19.210681    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
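	Two failures interlock from here on. kubeadm join aborts in its kubelet-start phase because a Ready Node named multinode-203818-m02 is back in the cluster, most likely re-registered by the still-running kubelet on m02 (note the "Port 10250 is in use" warning) after the Node object was deleted above. The cleanup path then fails as well: kubeadm reset refuses to guess between the containerd and cri-dockerd sockets both present on the host. A hedged manual recovery — an assumption about a fix, not what the test harness does — would be to name the socket explicitly and clear the stale Node before re-joining:

    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
    kubectl delete node multinode-203818-m02
    # then re-run the kubeadm join command captured in this log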
	I1025 20:44:19.210708    9122 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:19.053625    1094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
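	The retry waits grow irregularly (about 11s, 22s, 26s, 32s, 47s over this log), consistent with a randomized growing backoff in minikube's retry helper. As a rough sketch only, with try_join a hypothetical stand-in for the logged kubeadm join:

    wait=11
    until try_join; do           # try_join: hypothetical wrapper around the kubeadm join above
      sleep "$wait"
      wait=$(( wait * 3 / 2 ))   # the real waits are randomized, not a fixed factor
    done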
	I1025 20:44:30.257632    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:30.257757    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:44:30.292636    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:44:30.389620    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:44:30.389636    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 20:44:30.408437    9122 command_runner.go:130] ! W1026 03:44:30.307581    1763 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:44:30.408457    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:44:30.408465    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:44:30.408471    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:44:30.408477    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:44:30.408483    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:44:30.408492    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:44:30.408500    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1025 20:44:30.408537    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:30.307581    1763 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:30.408544    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:44:30.408552    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:44:30.443439    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:44:30.443453    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:30.443468    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:30.443478    9122 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:30.307581    1763 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.051305    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:52.051347    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:44:52.086855    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:44:52.182414    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:44:52.182431    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 20:44:52.200778    9122 command_runner.go:130] ! W1026 03:44:52.095980    2007 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:44:52.200792    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:44:52.200803    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:44:52.200808    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:44:52.200812    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:44:52.200818    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:44:52.200829    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:44:52.200836    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1025 20:44:52.200866    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:52.095980    2007 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.200877    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:44:52.200885    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:44:52.235377    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:44:52.235390    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.235413    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.235424    9122 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:52.095980    2007 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.438414    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:45:18.438478    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:45:18.474545    9122 command_runner.go:130] ! W1026 03:45:18.481202    2273 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:45:18.474818    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:45:18.498826    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:45:18.505886    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:45:18.561737    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:45:18.561751    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:45:18.586966    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:45:18.586979    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.590178    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:45:18.590190    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:45:18.590203    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1025 20:45:18.590234    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:18.481202    2273 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.590244    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:45:18.590255    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:45:18.626579    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:45:18.626595    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.626609    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.626619    9122 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:18.481202    2273 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.275205    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:45:50.275286    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:45:50.310625    9122 command_runner.go:130] ! W1026 03:45:50.318493    2605 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:45:50.310643    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:45:50.332848    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:45:50.337566    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:45:50.392780    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:45:50.392793    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:45:50.418654    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:45:50.418666    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.421800    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:45:50.421813    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:45:50.421820    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1025 20:45:50.421847    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:50.318493    2605 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.421857    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:45:50.421874    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:45:50.460197    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:45:50.460213    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.460233    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.460244    9122 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:50.318493    2605 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.271767    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:46:37.271843    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:46:37.307265    9122 command_runner.go:130] ! W1026 03:46:37.327327    3042 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:46:37.307283    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:46:37.330921    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:46:37.337123    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:46:37.396912    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:46:37.396930    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:46:37.423054    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:46:37.423067    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.426213    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:46:37.426226    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:46:37.426236    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1025 20:46:37.426269    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:46:37.327327    3042 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.426277    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:46:37.426285    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:46:37.464275    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:46:37.464291    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.464306    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.464323    9122 start.go:288] JoinCluster complete in 2m21.882148659s
	I1025 20:46:37.486388    9122 out.go:177] 
	W1025 20:46:37.507422    9122 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:46:37.327327    3042 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 20:46:37.507454    9122 out.go:239] * 
	W1025 20:46:37.508746    9122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 20:46:37.594287    9122 out.go:177] 

** /stderr **
multinode_test.go:354: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-203818 --wait=true -v=8 --alsologtostderr --driver=docker " : exit status 80
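The join fails because a Node object named "multinode-203818-m02" survived the restart with status "Ready", so kubeadm refuses to register a second node under the same name. The error text itself names the remedy; a minimal manual recovery, run against the surviving control plane (token and hash below are placeholders, not the real values from the log), would be:

    # Delete the stale Node object, then retry the join under the same name.
    kubectl delete node multinode-203818-m02
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-203818-m02

Writing the CRI socket in its unix:// URL form, as above, also addresses the deprecation warning in the stderr block.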
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-203818
helpers_test.go:235: (dbg) docker inspect multinode-203818:

-- stdout --
	[
	    {
	        "Id": "551ffd7f0135fe0d0094b55b3044183d19857825d6abc368207b3329a82ce511",
	        "Created": "2022-10-26T03:38:25.801320053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-26T03:43:38.079486968Z",
	            "FinishedAt": "2022-10-26T03:43:23.937929311Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/551ffd7f0135fe0d0094b55b3044183d19857825d6abc368207b3329a82ce511/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/551ffd7f0135fe0d0094b55b3044183d19857825d6abc368207b3329a82ce511/hostname",
	        "HostsPath": "/var/lib/docker/containers/551ffd7f0135fe0d0094b55b3044183d19857825d6abc368207b3329a82ce511/hosts",
	        "LogPath": "/var/lib/docker/containers/551ffd7f0135fe0d0094b55b3044183d19857825d6abc368207b3329a82ce511/551ffd7f0135fe0d0094b55b3044183d19857825d6abc368207b3329a82ce511-json.log",
	        "Name": "/multinode-203818",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-203818:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-203818",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0710f70d253b477592260ac6e1f833bf80884eace408c5d34ec4e90d6dd7033c-init/diff:/var/lib/docker/overlay2/9458c76ad567886b2941fe702595331447ec81af553bd6a5e305712ba6e99816/diff:/var/lib/docker/overlay2/f360822278c606190700446c63ea52e09800bb98b4011371f467c5329ccbfcdb/diff:/var/lib/docker/overlay2/d19b2a794f1a902d2cb81e3b717a0cbc2759ad547379336883f54acfc56f55aa/diff:/var/lib/docker/overlay2/2da5878d3547c20269c7d0a0c1fe821d0477558b5c9c8c15f108d8e6a7fbefd5/diff:/var/lib/docker/overlay2/8415b06fae0ecbcf9d1229e122da7dc6adef6f37fc541fe10e296454756df8d4/diff:/var/lib/docker/overlay2/3975772ef27829e60ff7a01cf11e459d24a06dd9acff5913f6c2e8275f0531c5/diff:/var/lib/docker/overlay2/3b0582df76ce9d3b29f45dbb3cfc3ec73cbe70e9df311b1864529e3946828d33/diff:/var/lib/docker/overlay2/40719af50c76ff060d79ba1be54c32127a4e49851d7d803f27a18352dfef2832/diff:/var/lib/docker/overlay2/9ccd8153ddc1bc61cae8a0cdd511730f47016b27273ad916204d1ce66039f5c4/diff:/var/lib/docker/overlay2/a99602
f01ac24af886b8248e9900864af0fbc776a4112056a1207b27942db176/diff:/var/lib/docker/overlay2/463c08b6020caddc0bc2b869257a9d4cdff5691d606db4e4a55ae8d203039fb8/diff:/var/lib/docker/overlay2/f3f67d9be6959cfcf69b9056b7af913fae3f9e6c74bec9bacc1f23592237c735/diff:/var/lib/docker/overlay2/f41ea619a41a3987b453fc5993510cda003cef6b896512fdbcd53c39a53c364a/diff:/var/lib/docker/overlay2/cef112361ca2ae2fcde83b778143cbe8b8ce1ddd1f07f8b353b65a088d962e3e/diff:/var/lib/docker/overlay2/ea61c71c4feb5341b51268b2cda82ee1878391b66787be6b295b21684f9a9096/diff:/var/lib/docker/overlay2/a6e559d447ffc610de1597df9b3c965ecc48305f9fcb4f3b43f48d38d43b166c/diff:/var/lib/docker/overlay2/a2dfaaa99882da5754ade275243ff8f867ab1bcc6ad23f15a45c08a117f95c80/diff:/var/lib/docker/overlay2/1518b34809b05558525693674d7a73d688ac90fbe38e233f58881e9d97cd9777/diff:/var/lib/docker/overlay2/c2cb7fb0ac5638040d2c9ed2728804b603304d44690876582ea2f4d1254c0c37/diff:/var/lib/docker/overlay2/fd6cf32d9b25daa7f585a0773f058b146cbd6d48c1c9cb208d36daec316c2f1c/diff:/var/lib/d
ocker/overlay2/10669751bc9b32f9dae2dfbff977817b218d8b62efdfd852669d984939337fc4/diff:/var/lib/docker/overlay2/c9826321b7cdee6e5767fcc25ffdb9f2b170dd88683deccec16775180472e052/diff:/var/lib/docker/overlay2/93fe86f96bbd8578686f5c6e85e468c67425a15bc3468fd6160bcf4b683f7ded/diff:/var/lib/docker/overlay2/22378b0a3177562c1dc57988573177acf03ee9353f251bd90424608f6609736f/diff:/var/lib/docker/overlay2/6f9a8de4c84b855e54278f112ef385b65cf7ce83137774bd447f581f931fdba8/diff:/var/lib/docker/overlay2/75929d4024047d79d1cb07e0aa4cbe999dcfe81d92a4f19bf4e934b7c749c777/diff:/var/lib/docker/overlay2/11747eb76a2c5d4e3e52e7791ccbb44129898ae37da84c5adb31930723991822/diff:/var/lib/docker/overlay2/3d0c322f0fbeca039eb0f2ace2e48a6556860edb13c31a68056d07f644b5947c/diff:/var/lib/docker/overlay2/37e5caf2125330a396059ef67db6dd7eeabbfcc3afd90b6364bbe13a2d4763ab/diff:/var/lib/docker/overlay2/7f66f473740d4034513069c7bd4de43269d2b328f058b3fbc64868409371fd53/diff:/var/lib/docker/overlay2/e7853ca89704ef21aa7014120bcc549c1259a5d8c3ef8a5932e2a095ef5
e8000/diff:/var/lib/docker/overlay2/236b2362f06a587e036fe0814a4a9f0a20f71d0bbd18b50ac3fcb17db425944b/diff:/var/lib/docker/overlay2/50076bcff37472720dbb36d9a3a48bb0432d6948a66b414369014ef78341f6bc/diff:/var/lib/docker/overlay2/f99fb67031aec99b950ed8054f90cd9baf7bcb83c4327c55617b11bba62f9d7a/diff:/var/lib/docker/overlay2/7f4f0cde1c3401952137a79e3dcde3c4ab23a17f6389d90215259d7431664326/diff:/var/lib/docker/overlay2/e9000629b4b1d18176f36ab8e78d978815141d81647473111b9a757aa4d55c64/diff:/var/lib/docker/overlay2/c75b32d5c68e353b0c46448c7980a0bb24c76734e38c27560b4da707e6dd5b6c/diff:/var/lib/docker/overlay2/9d95b640d4231740021ecd34b6875b2237ba49aba22355bacd06b1384cbbca01/diff:/var/lib/docker/overlay2/c67258bc5b10e43cb64f062847929271782f1774217621794cc512958d39874b/diff:/var/lib/docker/overlay2/10e2fa31b1491df452294985ea0c2b6d9e91bf8af6bc6d688fa4593f1fc089ad/diff:/var/lib/docker/overlay2/e3db9f814154a501672e127c2ef7363bb31a226f80a9d212f5cfdd9429fa486f/diff:/var/lib/docker/overlay2/866c15e4f6bd7392ddbc6f3e1eae5d8cc90fba
5017067d8c9133857eae97bdcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0710f70d253b477592260ac6e1f833bf80884eace408c5d34ec4e90d6dd7033c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0710f70d253b477592260ac6e1f833bf80884eace408c5d34ec4e90d6dd7033c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0710f70d253b477592260ac6e1f833bf80884eace408c5d34ec4e90d6dd7033c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-203818",
	                "Source": "/var/lib/docker/volumes/multinode-203818/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-203818",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-203818",
	                "name.minikube.sigs.k8s.io": "multinode-203818",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f2828382c223a71f95dba37b73f15772775537b090a673a7b09efb9e15eb574",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51341"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51342"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51343"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51344"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51345"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4f2828382c22",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-203818": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "551ffd7f0135",
	                        "multinode-203818"
	                    ],
	                    "NetworkID": "2a681e8ac8315327608e35ed99dffc53c4a0e844783de61b1c9a57c2f0e9b611",
	                    "EndpointID": "5d76c03ee3b4292c96d76eef2e04021408ee5e6534ca0827c7ade44eab2c77da",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
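The full docker inspect dump above is useful for archiving, but when triaging only a few fields matter. A Go-template query (standard docker inspect -f syntax) pulls just those; for example:

    # Container state, restart count, and the IP on the cluster network.
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} ip={{(index .NetworkSettings.Networks "multinode-203818").IPAddress}}' multinode-203818
    # Resolve a single published port (22/tcp maps to host port 51341 above):
    docker port multinode-203818 22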
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-203818 -n multinode-203818
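The status probe above pulls a single field via the same Go-template mechanism. Assuming the standard minikube status fields (Host, Kubelet, APIServer), several can be combined in one probe:

    out/minikube-darwin-amd64 status -p multinode-203818 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'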
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 logs -n 25: (3.138129132s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-203818 cp multinode-203818-m02:/home/docker/cp-test.txt                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818:/home/docker/cp-test_multinode-203818-m02_multinode-203818.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n                                                                                                     | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n multinode-203818 sudo cat                                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | /home/docker/cp-test_multinode-203818-m02_multinode-203818.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-203818 cp multinode-203818-m02:/home/docker/cp-test.txt                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m03:/home/docker/cp-test_multinode-203818-m02_multinode-203818-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n                                                                                                     | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n multinode-203818-m03 sudo cat                                                                       | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | /home/docker/cp-test_multinode-203818-m02_multinode-203818-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-203818 cp testdata/cp-test.txt                                                                                    | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n                                                                                                     | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3396189668/001/cp-test_multinode-203818-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n                                                                                                     | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818:/home/docker/cp-test_multinode-203818-m03_multinode-203818.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n                                                                                                     | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n multinode-203818 sudo cat                                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | /home/docker/cp-test_multinode-203818-m03_multinode-203818.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt                                                           | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m02:/home/docker/cp-test_multinode-203818-m03_multinode-203818-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n                                                                                                     | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | multinode-203818-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-203818 ssh -n multinode-203818-m02 sudo cat                                                                       | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|         | /home/docker/cp-test_multinode-203818-m03_multinode-203818-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-203818 node stop m03                                                                                              | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	| node    | multinode-203818 node start                                                                                                 | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:41 PDT |
	|         | m03 --alsologtostderr                                                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-203818                                                                                                    | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:41 PDT |                     |
	| stop    | -p multinode-203818                                                                                                         | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:41 PDT | 25 Oct 22 20:41 PDT |
	| start   | -p multinode-203818                                                                                                         | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:41 PDT | 25 Oct 22 20:42 PDT |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-203818                                                                                                    | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:42 PDT |                     |
	| node    | multinode-203818 node delete                                                                                                | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:42 PDT | 25 Oct 22 20:43 PDT |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-203818 stop                                                                                                       | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:43 PDT | 25 Oct 22 20:43 PDT |
	| start   | -p multinode-203818                                                                                                         | multinode-203818 | jenkins | v1.27.1 | 25 Oct 22 20:43 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=docker                                                                                                             |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 20:43:36
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 20:43:36.797301    9122 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:43:36.797493    9122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:43:36.797498    9122 out.go:309] Setting ErrFile to fd 2...
	I1025 20:43:36.797502    9122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:43:36.797615    9122 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:43:36.798058    9122 out.go:303] Setting JSON to false
	I1025 20:43:36.812595    9122 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2585,"bootTime":1666753231,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:43:36.812705    9122 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:43:36.835738    9122 out.go:177] * [multinode-203818] minikube v1.27.1 on Darwin 12.6
	I1025 20:43:36.879617    9122 notify.go:220] Checking for updates...
	I1025 20:43:36.901235    9122 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:43:36.922539    9122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:43:36.944550    9122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:43:36.991233    9122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:43:37.012549    9122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:43:37.034916    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:43:37.035511    9122 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:43:37.102805    9122 docker.go:137] docker version: linux-20.10.17
	I1025 20:43:37.102942    9122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:43:37.230560    9122 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:47 SystemTime:2022-10-26 03:43:37.179691583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:43:37.274370    9122 out.go:177] * Using the docker driver based on existing profile
	I1025 20:43:37.296166    9122 start.go:282] selected driver: docker
	I1025 20:43:37.296209    9122 start.go:808] validating driver "docker" against &{Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:43:37.296456    9122 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:43:37.296680    9122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:43:37.427024    9122 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:47 SystemTime:2022-10-26 03:43:37.375162167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:43:37.429216    9122 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 20:43:37.429247    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:43:37.429254    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:43:37.429281    9122 start_flags.go:317] config:
	{Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registr
y-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:43:37.472854    9122 out.go:177] * Starting control plane node multinode-203818 in cluster multinode-203818
	I1025 20:43:37.493762    9122 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:43:37.516097    9122 out.go:177] * Pulling base image ...
	I1025 20:43:37.559861    9122 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:43:37.559935    9122 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:43:37.559962    9122 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 20:43:37.560021    9122 cache.go:57] Caching tarball of preloaded images
	I1025 20:43:37.560881    9122 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 20:43:37.560977    9122 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 20:43:37.561415    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:43:37.624892    9122 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:43:37.624911    9122 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:43:37.624926    9122 cache.go:208] Successfully downloaded all kic artifacts
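Two cache lookups above let the start skip downloads entirely: the kic base image is checked in the local Docker daemon, and the preload tarball is checked on disk. Sketched by hand in shell (digest dropped from the image tag for brevity):

    # Exit 0 when the base image is already in the local daemon.
    docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094 >/dev/null 2>&1 \
      && echo "kic base image cached"
    # The preload tarball lives under the profile's .minikube cache directory.
    test -f /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 \
      && echo "preload tarball cached"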
	I1025 20:43:37.624970    9122 start.go:364] acquiring machines lock for multinode-203818: {Name:mk88e10ba1d84a7a598add48978caab9a0493783 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:43:37.625062    9122 start.go:368] acquired machines lock for "multinode-203818" in 55.292µs
	I1025 20:43:37.625081    9122 start.go:96] Skipping create...Using existing machine configuration
	I1025 20:43:37.625090    9122 fix.go:55] fixHost starting: 
	I1025 20:43:37.625347    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:43:37.686750    9122 fix.go:103] recreateIfNeeded on multinode-203818: state=Stopped err=<nil>
	W1025 20:43:37.686786    9122 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 20:43:37.708893    9122 out.go:177] * Restarting existing docker container for "multinode-203818" ...
	I1025 20:43:37.730490    9122 cli_runner.go:164] Run: docker start multinode-203818
	I1025 20:43:38.065539    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:43:38.129208    9122 kic.go:415] container "multinode-203818" state is running.
	I1025 20:43:38.129785    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
	I1025 20:43:38.239205    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:43:38.239606    9122 machine.go:88] provisioning docker machine ...
	I1025 20:43:38.239629    9122 ubuntu.go:169] provisioning hostname "multinode-203818"
	I1025 20:43:38.239720    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:38.304990    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:38.305182    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:38.305195    9122 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-203818 && echo "multinode-203818" | sudo tee /etc/hostname
	I1025 20:43:38.444923    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-203818
	
	I1025 20:43:38.445006    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:38.510155    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:38.510312    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:38.510326    9122 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-203818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-203818/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-203818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 20:43:38.631522    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:43:38.631547    9122 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/14956-2080/.minikube CaCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/14956-2080/.minikube}
	I1025 20:43:38.631580    9122 ubuntu.go:177] setting up certificates
	I1025 20:43:38.631588    9122 provision.go:83] configureAuth start
	I1025 20:43:38.631648    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
	I1025 20:43:38.697977    9122 provision.go:138] copyHostCerts
	I1025 20:43:38.698021    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:43:38.698088    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem, removing ...
	I1025 20:43:38.698098    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:43:38.698197    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem (1078 bytes)
	I1025 20:43:38.698393    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:43:38.698425    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem, removing ...
	I1025 20:43:38.698430    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:43:38.698490    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem (1123 bytes)
	I1025 20:43:38.698599    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:43:38.698629    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem, removing ...
	I1025 20:43:38.698633    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:43:38.698691    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem (1679 bytes)
	I1025 20:43:38.698820    9122 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem org=jenkins.multinode-203818 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-203818]
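Here provision.go generates a server certificate signed by the minikube CA, embedding the SAN list shown in the log line above. A hand-rolled openssl equivalent, purely illustrative (file names assumed; 1095 days matches the profile's 26280h CertExpiration):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-203818" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf "subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-203818")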
	I1025 20:43:38.920081    9122 provision.go:172] copyRemoteCerts
	I1025 20:43:38.920143    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 20:43:38.920190    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:38.986346    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:39.075651    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 20:43:39.075765    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 20:43:39.092201    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 20:43:39.092269    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 20:43:39.108732    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 20:43:39.108790    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 20:43:39.124975    9122 provision.go:86] duration metric: configureAuth took 493.374297ms
	I1025 20:43:39.124987    9122 ubuntu.go:193] setting minikube options for container-runtime
	I1025 20:43:39.125145    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:43:39.125204    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.188102    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:39.188237    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:39.188246    9122 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 20:43:39.316932    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 20:43:39.316947    9122 ubuntu.go:71] root file system type: overlay
	I1025 20:43:39.317079    9122 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 20:43:39.317144    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.380637    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:39.380818    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:39.380865    9122 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 20:43:39.518577    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 20:43:39.518648    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.580675    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:43:39.580814    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51341 <nil> <nil>}
	I1025 20:43:39.580827    9122 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 20:43:39.715195    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:43:39.715213    9122 machine.go:91] provisioned docker machine in 1.475598823s
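	Note: the unit update at 20:43:39 is deliberately idempotent: the rendered file is written to docker.service.new, and only when it differs from the installed unit is it moved into place, followed by daemon-reload and a docker restart. A minimal standalone sketch of the same pattern (render_unit and the temp path are placeholders):

	  render_unit > /tmp/docker.service.new
	  sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
	    sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	  }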
	I1025 20:43:39.715222    9122 start.go:300] post-start starting for "multinode-203818" (driver="docker")
	I1025 20:43:39.715228    9122 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 20:43:39.715309    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 20:43:39.715355    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.777880    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:39.867298    9122 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 20:43:39.870682    9122 command_runner.go:130] > NAME="Ubuntu"
	I1025 20:43:39.870694    9122 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1025 20:43:39.870698    9122 command_runner.go:130] > ID=ubuntu
	I1025 20:43:39.870702    9122 command_runner.go:130] > ID_LIKE=debian
	I1025 20:43:39.870706    9122 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1025 20:43:39.870709    9122 command_runner.go:130] > VERSION_ID="20.04"
	I1025 20:43:39.870713    9122 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 20:43:39.870717    9122 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 20:43:39.870722    9122 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 20:43:39.870729    9122 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 20:43:39.870733    9122 command_runner.go:130] > VERSION_CODENAME=focal
	I1025 20:43:39.870740    9122 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1025 20:43:39.870782    9122 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 20:43:39.870794    9122 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 20:43:39.870806    9122 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 20:43:39.870811    9122 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 20:43:39.870818    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/addons for local assets ...
	I1025 20:43:39.870910    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/files for local assets ...
	I1025 20:43:39.871059    9122 filesync.go:149] local asset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> 29162.pem in /etc/ssl/certs
	I1025 20:43:39.871064    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /etc/ssl/certs/29162.pem
	I1025 20:43:39.871200    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 20:43:39.877925    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:43:39.894568    9122 start.go:303] post-start completed in 179.336217ms
	I1025 20:43:39.894636    9122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:43:39.894679    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:39.956826    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:40.049863    9122 command_runner.go:130] > 6%
	I1025 20:43:40.049929    9122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:43:40.054081    9122 command_runner.go:130] > 92G
	I1025 20:43:40.054431    9122 fix.go:57] fixHost completed within 2.42934018s
	I1025 20:43:40.054442    9122 start.go:83] releasing machines lock for "multinode-203818", held for 2.429369424s
	I1025 20:43:40.054516    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
	I1025 20:43:40.117705    9122 ssh_runner.go:195] Run: systemctl --version
	I1025 20:43:40.117706    9122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 20:43:40.117767    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:40.117804    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:40.184020    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:40.184253    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:43:40.316302    9122 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 20:43:40.316335    9122 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I1025 20:43:40.316354    9122 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I1025 20:43:40.316481    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 20:43:40.323968    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1025 20:43:40.336177    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:43:40.403011    9122 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 20:43:40.482035    9122 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 20:43:40.490847    9122 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 20:43:40.490858    9122 command_runner.go:130] > [Unit]
	I1025 20:43:40.490868    9122 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 20:43:40.490873    9122 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 20:43:40.490877    9122 command_runner.go:130] > BindsTo=containerd.service
	I1025 20:43:40.490885    9122 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 20:43:40.490890    9122 command_runner.go:130] > Wants=network-online.target
	I1025 20:43:40.490896    9122 command_runner.go:130] > Requires=docker.socket
	I1025 20:43:40.490900    9122 command_runner.go:130] > StartLimitBurst=3
	I1025 20:43:40.490903    9122 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 20:43:40.490906    9122 command_runner.go:130] > [Service]
	I1025 20:43:40.490909    9122 command_runner.go:130] > Type=notify
	I1025 20:43:40.490912    9122 command_runner.go:130] > Restart=on-failure
	I1025 20:43:40.490919    9122 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 20:43:40.490927    9122 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 20:43:40.490933    9122 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 20:43:40.490939    9122 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 20:43:40.490945    9122 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 20:43:40.490952    9122 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 20:43:40.490959    9122 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 20:43:40.490972    9122 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 20:43:40.490979    9122 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 20:43:40.490983    9122 command_runner.go:130] > ExecStart=
	I1025 20:43:40.490994    9122 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 20:43:40.490999    9122 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 20:43:40.491006    9122 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 20:43:40.491011    9122 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 20:43:40.491014    9122 command_runner.go:130] > LimitNOFILE=infinity
	I1025 20:43:40.491018    9122 command_runner.go:130] > LimitNPROC=infinity
	I1025 20:43:40.491035    9122 command_runner.go:130] > LimitCORE=infinity
	I1025 20:43:40.491044    9122 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 20:43:40.491050    9122 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 20:43:40.491053    9122 command_runner.go:130] > TasksMax=infinity
	I1025 20:43:40.491057    9122 command_runner.go:130] > TimeoutStartSec=0
	I1025 20:43:40.491062    9122 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 20:43:40.491070    9122 command_runner.go:130] > Delegate=yes
	I1025 20:43:40.491078    9122 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 20:43:40.491081    9122 command_runner.go:130] > KillMode=process
	I1025 20:43:40.491089    9122 command_runner.go:130] > [Install]
	I1025 20:43:40.491093    9122 command_runner.go:130] > WantedBy=multi-user.target
	I1025 20:43:40.491402    9122 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 20:43:40.491456    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 20:43:40.500811    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 20:43:40.512645    9122 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 20:43:40.512656    9122 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
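	Note: the /etc/crictl.yaml written above is what lets the bare "sudo crictl version" call further down resolve the cri-dockerd socket without flags. The explicit form, as an illustrative equivalent:

	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version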
	I1025 20:43:40.513547    9122 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 20:43:40.580296    9122 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 20:43:40.648806    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:43:40.714674    9122 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 20:43:40.953289    9122 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 20:43:41.019114    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:43:41.084840    9122 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 20:43:41.093931    9122 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 20:43:41.093997    9122 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 20:43:41.097488    9122 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 20:43:41.097498    9122 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 20:43:41.097502    9122 command_runner.go:130] > Device: 97h/151d	Inode: 115         Links: 1
	I1025 20:43:41.097507    9122 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 20:43:41.097513    9122 command_runner.go:130] > Access: 2022-10-26 03:43:40.411250876 +0000
	I1025 20:43:41.097517    9122 command_runner.go:130] > Modify: 2022-10-26 03:43:40.411250876 +0000
	I1025 20:43:41.097522    9122 command_runner.go:130] > Change: 2022-10-26 03:43:40.412250876 +0000
	I1025 20:43:41.097525    9122 command_runner.go:130] >  Birth: -
	I1025 20:43:41.097649    9122 start.go:472] Will wait 60s for crictl version
	I1025 20:43:41.097690    9122 ssh_runner.go:195] Run: sudo crictl version
	I1025 20:43:41.123286    9122 command_runner.go:130] > Version:  0.1.0
	I1025 20:43:41.123297    9122 command_runner.go:130] > RuntimeName:  docker
	I1025 20:43:41.123301    9122 command_runner.go:130] > RuntimeVersion:  20.10.18
	I1025 20:43:41.123316    9122 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1025 20:43:41.125594    9122 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 20:43:41.125654    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:43:41.150619    9122 command_runner.go:130] > 20.10.18
	I1025 20:43:41.152806    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:43:41.177603    9122 command_runner.go:130] > 20.10.18
	I1025 20:43:41.225401    9122 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 20:43:41.225613    9122 cli_runner.go:164] Run: docker exec -t multinode-203818 dig +short host.docker.internal
	I1025 20:43:41.344949    9122 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 20:43:41.345051    9122 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 20:43:41.349231    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
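	Note: the hosts update above filters out any stale host.minikube.internal line, appends the fresh mapping, and only then copies the temp file over /etc/hosts with sudo (a plain ">" redirect would run unprivileged and fail). The same pattern for an arbitrary entry, as a sketch with placeholder values:

	  NAME=host.minikube.internal IP=192.168.65.2
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts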
	I1025 20:43:41.358537    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:41.421091    9122 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:43:41.421171    9122 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 20:43:41.442027    9122 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1025 20:43:41.442045    9122 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1025 20:43:41.442050    9122 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1025 20:43:41.442063    9122 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1025 20:43:41.442069    9122 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1025 20:43:41.442075    9122 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1025 20:43:41.442081    9122 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1025 20:43:41.442088    9122 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1025 20:43:41.442092    9122 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1025 20:43:41.442096    9122 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 20:43:41.442101    9122 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1025 20:43:41.444156    9122 docker.go:612] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1025 20:43:41.444172    9122 docker.go:543] Images already preloaded, skipping extraction
	I1025 20:43:41.444241    9122 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 20:43:41.463199    9122 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1025 20:43:41.463214    9122 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1025 20:43:41.463224    9122 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1025 20:43:41.463230    9122 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1025 20:43:41.463235    9122 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1025 20:43:41.463240    9122 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1025 20:43:41.463243    9122 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1025 20:43:41.463249    9122 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1025 20:43:41.463254    9122 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1025 20:43:41.463258    9122 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 20:43:41.463261    9122 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1025 20:43:41.465231    9122 docker.go:612] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1025 20:43:41.465249    9122 cache_images.go:84] Images are preloaded, skipping loading
	I1025 20:43:41.465324    9122 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 20:43:41.528201    9122 command_runner.go:130] > systemd
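	Note: the CgroupDriver probe above matters because kubelet's cgroupDriver (set to systemd in the KubeletConfiguration rendered below) must match what Docker reports; a mismatch is a classic kubelet startup failure. The probe itself is just:

	  docker info --format '{{.CgroupDriver}}'   # expected: systemd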
	I1025 20:43:41.530250    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:43:41.530265    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:43:41.530300    9122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 20:43:41.530317    9122 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-203818 NodeName:multinode-203818 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 20:43:41.530436    9122 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-203818"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 20:43:41.530534    9122 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-203818 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 20:43:41.530599    9122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 20:43:41.537380    9122 command_runner.go:130] > kubeadm
	I1025 20:43:41.537395    9122 command_runner.go:130] > kubectl
	I1025 20:43:41.537402    9122 command_runner.go:130] > kubelet
	I1025 20:43:41.538254    9122 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 20:43:41.538303    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 20:43:41.545156    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I1025 20:43:41.557366    9122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 20:43:41.569642    9122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
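	Note: the 2038-byte payload written to kubeadm.yaml.new is the four-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One illustrative way to sanity-check such a file by hand inside the node; minikube itself instead diffs it against the previous kubeadm.yaml (see 20:43:41.986 below):

	  sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new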
	I1025 20:43:41.581820    9122 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 20:43:41.585658    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 20:43:41.594782    9122 certs.go:54] Setting up /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818 for IP: 192.168.58.2
	I1025 20:43:41.594881    9122 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key
	I1025 20:43:41.594927    9122 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key
	I1025 20:43:41.595008    9122 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key
	I1025 20:43:41.595062    9122 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.key.cee25041
	I1025 20:43:41.595115    9122 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.key
	I1025 20:43:41.595122    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 20:43:41.595140    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 20:43:41.595154    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 20:43:41.595169    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 20:43:41.595183    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 20:43:41.595198    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 20:43:41.595211    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 20:43:41.595226    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 20:43:41.595328    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem (1338 bytes)
	W1025 20:43:41.595362    9122 certs.go:384] ignoring /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916_empty.pem, impossibly tiny 0 bytes
	I1025 20:43:41.595369    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 20:43:41.595398    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem (1078 bytes)
	I1025 20:43:41.595426    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem (1123 bytes)
	I1025 20:43:41.595450    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem (1679 bytes)
	I1025 20:43:41.595515    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:43:41.595545    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem -> /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.595561    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.595575    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.596017    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 20:43:41.612413    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 20:43:41.629316    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 20:43:41.646225    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 20:43:41.662889    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 20:43:41.679620    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 20:43:41.696067    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 20:43:41.712685    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 20:43:41.729013    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem --> /usr/share/ca-certificates/2916.pem (1338 bytes)
	I1025 20:43:41.745682    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /usr/share/ca-certificates/29162.pem (1708 bytes)
	I1025 20:43:41.761943    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 20:43:41.778134    9122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 20:43:41.791312    9122 ssh_runner.go:195] Run: openssl version
	I1025 20:43:41.796102    9122 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1025 20:43:41.796312    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 20:43:41.804092    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.824272    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.824421    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.824464    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:43:41.829180    9122 command_runner.go:130] > b5213941
	I1025 20:43:41.829376    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 20:43:41.836757    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2916.pem && ln -fs /usr/share/ca-certificates/2916.pem /etc/ssl/certs/2916.pem"
	I1025 20:43:41.844364    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.847905    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.848011    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.848048    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2916.pem
	I1025 20:43:41.852725    9122 command_runner.go:130] > 51391683
	I1025 20:43:41.853102    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2916.pem /etc/ssl/certs/51391683.0"
	I1025 20:43:41.860674    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29162.pem && ln -fs /usr/share/ca-certificates/29162.pem /etc/ssl/certs/29162.pem"
	I1025 20:43:41.868283    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.871936    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.872081    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.872123    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29162.pem
	I1025 20:43:41.877007    9122 command_runner.go:130] > 3ec20f2e
	I1025 20:43:41.877337    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29162.pem /etc/ssl/certs/3ec20f2e.0"
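	Note: the three symlinks created above follow OpenSSL's CApath convention: each certificate in /etc/ssl/certs is reachable as <subject-hash>.0, where the hash is the value printed by "openssl x509 -hash". An illustrative check that a link resolves:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem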
	I1025 20:43:41.884680    9122 kubeadm.go:396] StartCluster: {Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:43:41.884786    9122 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 20:43:41.905710    9122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 20:43:41.912543    9122 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1025 20:43:41.912555    9122 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1025 20:43:41.912560    9122 command_runner.go:130] > /var/lib/minikube/etcd:
	I1025 20:43:41.912564    9122 command_runner.go:130] > member
	I1025 20:43:41.913350    9122 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1025 20:43:41.913363    9122 kubeadm.go:627] restartCluster start
	I1025 20:43:41.913401    9122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 20:43:41.919901    9122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:41.919960    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:43:41.984585    9122 kubeconfig.go:135] verify returned: extract IP: "multinode-203818" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:43:41.984694    9122 kubeconfig.go:146] "multinode-203818" context is missing from /Users/jenkins/minikube-integration/14956-2080/kubeconfig - will repair!
	I1025 20:43:41.984907    9122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/kubeconfig: {Name:mke147bd0f9c02680989e4cfb1c572f71a0430b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:43:41.985383    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:43:41.985562    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 20:43:41.985845    9122 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 20:43:41.986018    9122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 20:43:41.993770    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:41.993835    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.002126    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.204259    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.204447    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.215734    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.404272    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.404480    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.415415    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.604273    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.604449    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.614604    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:42.804287    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:42.804524    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:42.815730    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.004270    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.004421    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.015319    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.204300    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.204451    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.215085    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.404292    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.404451    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.415218    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.604276    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.604422    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.615380    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:43.804391    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:43.804481    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:43.814752    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.004285    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.004483    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.015694    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.204068    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.204246    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.214623    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.404410    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.404509    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.414514    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.604238    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.604447    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.615090    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:44.804212    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:44.804384    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:44.815087    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.004239    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:45.004378    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:45.015089    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.015098    9122 api_server.go:165] Checking apiserver status ...
	I1025 20:43:45.015139    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 20:43:45.022900    9122 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.022911    9122 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
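	Note: the block of identical checks above is a ~200ms poll: each iteration looks for a kube-apiserver process with pgrep, and once the deadline passes the restart logic concludes the control plane needs reconfiguring. A rough standalone equivalent in shell (interval and deadline are illustrative, not minikube's actual values):

	  deadline=$((SECONDS + 3))
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    [ "$SECONDS" -ge "$deadline" ] && break
	    sleep 0.2
	  done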
	I1025 20:43:45.022917    9122 kubeadm.go:1114] stopping kube-system containers ...
	I1025 20:43:45.022975    9122 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 20:43:45.045339    9122 command_runner.go:130] > a76713468a8e
	I1025 20:43:45.045351    9122 command_runner.go:130] > bf7b5ebb864d
	I1025 20:43:45.045355    9122 command_runner.go:130] > 6e75fc801378
	I1025 20:43:45.045358    9122 command_runner.go:130] > c5b570db3f97
	I1025 20:43:45.045361    9122 command_runner.go:130] > c08d84877f86
	I1025 20:43:45.045364    9122 command_runner.go:130] > d412a631e4ae
	I1025 20:43:45.045367    9122 command_runner.go:130] > 901030c09673
	I1025 20:43:45.045371    9122 command_runner.go:130] > fa258b141e90
	I1025 20:43:45.045376    9122 command_runner.go:130] > 3494771f98f1
	I1025 20:43:45.045381    9122 command_runner.go:130] > acf347f03ed9
	I1025 20:43:45.045385    9122 command_runner.go:130] > c0ffc4ed686c
	I1025 20:43:45.045388    9122 command_runner.go:130] > 29a55c918cc0
	I1025 20:43:45.045391    9122 command_runner.go:130] > 6578e02f60a4
	I1025 20:43:45.045394    9122 command_runner.go:130] > 34b369462e06
	I1025 20:43:45.045398    9122 command_runner.go:130] > aa702be3519c
	I1025 20:43:45.045402    9122 command_runner.go:130] > 6e35a55843e1
	I1025 20:43:45.045407    9122 command_runner.go:130] > 67c78a683e4d
	I1025 20:43:45.045416    9122 command_runner.go:130] > 7f82edcd8e10
	I1025 20:43:45.045420    9122 command_runner.go:130] > 66606cdef38a
	I1025 20:43:45.045436    9122 command_runner.go:130] > b2980ae0c352
	I1025 20:43:45.045440    9122 command_runner.go:130] > d6944494206f
	I1025 20:43:45.045443    9122 command_runner.go:130] > 01e7c971f29b
	I1025 20:43:45.045446    9122 command_runner.go:130] > ed3dab775831
	I1025 20:43:45.045449    9122 command_runner.go:130] > 113916e4ec18
	I1025 20:43:45.045452    9122 command_runner.go:130] > d8ecd8887c5d
	I1025 20:43:45.045456    9122 command_runner.go:130] > 87ee196d2cd9
	I1025 20:43:45.045459    9122 command_runner.go:130] > 6b8aa122335e
	I1025 20:43:45.045462    9122 command_runner.go:130] > 311c6f77b2dd
	I1025 20:43:45.045466    9122 command_runner.go:130] > e12714297d31
	I1025 20:43:45.045469    9122 command_runner.go:130] > 2ec5e29e095a
	I1025 20:43:45.045472    9122 command_runner.go:130] > b79b4f06c21d
	I1025 20:43:45.045476    9122 command_runner.go:130] > e8f6e8673bc0
	I1025 20:43:45.047546    9122 docker.go:444] Stopping containers: [a76713468a8e bf7b5ebb864d 6e75fc801378 c5b570db3f97 c08d84877f86 d412a631e4ae 901030c09673 fa258b141e90 3494771f98f1 acf347f03ed9 c0ffc4ed686c 29a55c918cc0 6578e02f60a4 34b369462e06 aa702be3519c 6e35a55843e1 67c78a683e4d 7f82edcd8e10 66606cdef38a b2980ae0c352 d6944494206f 01e7c971f29b ed3dab775831 113916e4ec18 d8ecd8887c5d 87ee196d2cd9 6b8aa122335e 311c6f77b2dd e12714297d31 2ec5e29e095a b79b4f06c21d e8f6e8673bc0]
	I1025 20:43:45.047617    9122 ssh_runner.go:195] Run: docker stop a76713468a8e bf7b5ebb864d 6e75fc801378 c5b570db3f97 c08d84877f86 d412a631e4ae 901030c09673 fa258b141e90 3494771f98f1 acf347f03ed9 c0ffc4ed686c 29a55c918cc0 6578e02f60a4 34b369462e06 aa702be3519c 6e35a55843e1 67c78a683e4d 7f82edcd8e10 66606cdef38a b2980ae0c352 d6944494206f 01e7c971f29b ed3dab775831 113916e4ec18 d8ecd8887c5d 87ee196d2cd9 6b8aa122335e 311c6f77b2dd e12714297d31 2ec5e29e095a b79b4f06c21d e8f6e8673bc0
	I1025 20:43:45.069687    9122 command_runner.go:130] > a76713468a8e
	I1025 20:43:45.069714    9122 command_runner.go:130] > bf7b5ebb864d
	I1025 20:43:45.070077    9122 command_runner.go:130] > 6e75fc801378
	I1025 20:43:45.070084    9122 command_runner.go:130] > c5b570db3f97
	I1025 20:43:45.070089    9122 command_runner.go:130] > c08d84877f86
	I1025 20:43:45.070097    9122 command_runner.go:130] > d412a631e4ae
	I1025 20:43:45.070708    9122 command_runner.go:130] > 901030c09673
	I1025 20:43:45.070714    9122 command_runner.go:130] > fa258b141e90
	I1025 20:43:45.070717    9122 command_runner.go:130] > 3494771f98f1
	I1025 20:43:45.070722    9122 command_runner.go:130] > acf347f03ed9
	I1025 20:43:45.070726    9122 command_runner.go:130] > c0ffc4ed686c
	I1025 20:43:45.070729    9122 command_runner.go:130] > 29a55c918cc0
	I1025 20:43:45.070733    9122 command_runner.go:130] > 6578e02f60a4
	I1025 20:43:45.070736    9122 command_runner.go:130] > 34b369462e06
	I1025 20:43:45.070740    9122 command_runner.go:130] > aa702be3519c
	I1025 20:43:45.070743    9122 command_runner.go:130] > 6e35a55843e1
	I1025 20:43:45.070747    9122 command_runner.go:130] > 67c78a683e4d
	I1025 20:43:45.070750    9122 command_runner.go:130] > 7f82edcd8e10
	I1025 20:43:45.070754    9122 command_runner.go:130] > 66606cdef38a
	I1025 20:43:45.070758    9122 command_runner.go:130] > b2980ae0c352
	I1025 20:43:45.070762    9122 command_runner.go:130] > d6944494206f
	I1025 20:43:45.070765    9122 command_runner.go:130] > 01e7c971f29b
	I1025 20:43:45.070769    9122 command_runner.go:130] > ed3dab775831
	I1025 20:43:45.070772    9122 command_runner.go:130] > 113916e4ec18
	I1025 20:43:45.070776    9122 command_runner.go:130] > d8ecd8887c5d
	I1025 20:43:45.070783    9122 command_runner.go:130] > 87ee196d2cd9
	I1025 20:43:45.070787    9122 command_runner.go:130] > 6b8aa122335e
	I1025 20:43:45.070791    9122 command_runner.go:130] > 311c6f77b2dd
	I1025 20:43:45.070795    9122 command_runner.go:130] > e12714297d31
	I1025 20:43:45.070798    9122 command_runner.go:130] > 2ec5e29e095a
	I1025 20:43:45.070801    9122 command_runner.go:130] > b79b4f06c21d
	I1025 20:43:45.070806    9122 command_runner.go:130] > e8f6e8673bc0
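
	The stop sequence above collects every container ID in one pass and issues a single batched "docker stop" over SSH rather than one invocation per container. A minimal Go sketch of that batching, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (here it simply shells out locally so the sketch is self-contained):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runSSH stands in for minikube's ssh_runner; in this sketch it just
	// runs the command locally.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	// stopAllContainers mirrors the single "Run: docker stop <id> <id> ..."
	// line in the log: list all IDs, then stop them in one batch.
	func stopAllContainers() error {
		out, err := runSSH("docker ps -a --format '{{.ID}}'")
		if err != nil {
			return err
		}
		ids := strings.Fields(out)
		if len(ids) == 0 {
			return nil // nothing to stop
		}
		_, err = runSSH("docker stop " + strings.Join(ids, " "))
		return err
	}

	func main() {
		if err := stopAllContainers(); err != nil {
			fmt.Println("stop failed:", err)
		}
	}
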
	I1025 20:43:45.073246    9122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 20:43:45.082964    9122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 20:43:45.089843    9122 command_runner.go:130] > -rw------- 1 root root 5643 Oct 26 03:38 /etc/kubernetes/admin.conf
	I1025 20:43:45.089863    9122 command_runner.go:130] > -rw------- 1 root root 5652 Oct 26 03:41 /etc/kubernetes/controller-manager.conf
	I1025 20:43:45.089870    9122 command_runner.go:130] > -rw------- 1 root root 2003 Oct 26 03:38 /etc/kubernetes/kubelet.conf
	I1025 20:43:45.089877    9122 command_runner.go:130] > -rw------- 1 root root 5600 Oct 26 03:41 /etc/kubernetes/scheduler.conf
	I1025 20:43:45.090504    9122 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 26 03:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 26 03:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Oct 26 03:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 26 03:41 /etc/kubernetes/scheduler.conf
	
	I1025 20:43:45.090555    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 20:43:45.096749    9122 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1025 20:43:45.097416    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 20:43:45.103841    9122 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1025 20:43:45.104474    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 20:43:45.111377    9122 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.111425    9122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 20:43:45.117786    9122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 20:43:45.124321    9122 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:43:45.124361    9122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
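
	The grep/rm pairs above implement a simple staleness test: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and a grep exit status of 1 (no match) is taken to mean the file points at an old endpoint and should be deleted so the kubeadm kubeconfig phase below can regenerate it. A sketch of that decision, reusing the hypothetical runSSH helper from the earlier sketch:

	// ensureEndpoint keeps confFile only if it references endpoint; otherwise
	// it removes the file so "kubeadm init phase kubeconfig" recreates it.
	func ensureEndpoint(confFile, endpoint string) error {
		if _, err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, confFile)); err == nil {
			return nil // endpoint present, keep the file
		}
		// grep exits 1 on no match; treat the kubeconfig as stale.
		_, err := runSSH("sudo rm -f " + confFile)
		return err
	}
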
	I1025 20:43:45.130791    9122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 20:43:45.137836    9122 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 20:43:45.137850    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:45.177937    9122 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 20:43:45.177950    9122 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1025 20:43:45.177964    9122 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1025 20:43:45.177971    9122 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 20:43:45.177980    9122 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1025 20:43:45.177987    9122 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1025 20:43:45.178228    9122 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1025 20:43:45.178392    9122 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1025 20:43:45.178685    9122 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1025 20:43:45.178845    9122 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 20:43:45.179211    9122 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 20:43:45.179708    9122 command_runner.go:130] > [certs] Using the existing "sa" key
	I1025 20:43:45.182267    9122 command_runner.go:130] ! W1026 03:43:45.177650    1166 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:45.182282    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:45.222772    9122 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 20:43:45.376929    9122 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1025 20:43:45.491773    9122 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1025 20:43:45.579278    9122 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 20:43:45.908244    9122 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 20:43:45.912776    9122 command_runner.go:130] ! W1026 03:43:45.223201    1175 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:45.912796    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:45.964483    9122 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 20:43:45.964990    9122 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 20:43:45.964999    9122 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 20:43:46.039526    9122 command_runner.go:130] ! W1026 03:43:45.955639    1198 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:46.039552    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:46.078197    9122 command_runner.go:130] ! W1026 03:43:46.082911    1233 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:46.089924    9122 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 20:43:46.089937    9122 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 20:43:46.089942    9122 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 20:43:46.089948    9122 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 20:43:46.089962    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:46.169856    9122 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 20:43:46.175403    9122 command_runner.go:130] ! W1026 03:43:46.169866    1246 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
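
	Rather than a full "kubeadm init", the reconfigure path replays individual init phases against the same kubeadm.yaml, which is why the certs and kubeconfig steps report "Using existing ..." for everything that still matches on disk. A sketch of the phase sequencing seen above, again on top of the hypothetical runSSH helper:

	// initPhases lists the phases in the order the log runs them.
	var initPhases = []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}

	func replayInitPhases(binDir, cfg string) error {
		for _, phase := range initPhases {
			cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
			if out, err := runSSH(cmd); err != nil {
				return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
			}
		}
		return nil
	}
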
	I1025 20:43:46.175426    9122 api_server.go:51] waiting for apiserver process to appear ...
	I1025 20:43:46.175470    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:46.690790    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:47.189243    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:47.688717    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:48.189268    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:43:48.199232    9122 command_runner.go:130] > 1821
	I1025 20:43:48.200039    9122 api_server.go:71] duration metric: took 2.024610468s to wait for apiserver process to appear ...
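
	The process wait above polls pgrep roughly every 500ms until the command prints a PID ("1821" here). A stdlib sketch of that loop, with the same hypothetical helper:

	// waitForAPIServerProcess polls until kube-apiserver shows up or the
	// timeout expires; pattern and cadence follow the log above.
	func waitForAPIServerProcess(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`)
			if pid := strings.TrimSpace(out); err == nil && pid != "" {
				return pid, nil // e.g. "1821"
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}
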
	I1025 20:43:48.200060    9122 api_server.go:87] waiting for apiserver healthz status ...
	I1025 20:43:48.200088    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:50.534225    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 20:43:50.534242    9122 api_server.go:102] status: https://127.0.0.1:51345/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 20:43:51.034939    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:51.042629    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 20:43:51.042650    9122 api_server.go:102] status: https://127.0.0.1:51345/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 20:43:51.534457    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:51.540222    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 20:43:51.540236    9122 api_server.go:102] status: https://127.0.0.1:51345/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 20:43:52.036395    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:43:52.043580    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 200:
	ok
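
	The healthz wait treats anything other than a 200 as "try again": the 403 appears while RBAC bootstrap roles are still being created (the probe connects without credentials, hence system:anonymous), and the 500s last until the rbac and scheduling post-start hooks flip to ok. A stdlib sketch of a single probe; skipping TLS verification is an assumption made only to keep the sketch short:

	// checkHealthz returns nil only on HTTP 200; 403 and 500 both mean retry.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch-only shortcut: trust the forwarded localhost endpoint.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		return nil // body is "ok"
	}
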
	I1025 20:43:52.043641    9122 round_trippers.go:463] GET https://127.0.0.1:51345/version
	I1025 20:43:52.043649    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:52.043658    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:52.043670    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:52.049885    9122 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1025 20:43:52.049894    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:52.049900    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:52.049905    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:52.049913    9122 round_trippers.go:580]     Content-Length: 263
	I1025 20:43:52.049918    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:52 GMT
	I1025 20:43:52.049922    9122 round_trippers.go:580]     Audit-Id: c4eba5a2-7038-404b-a89e-7e6dd65fcffc
	I1025 20:43:52.049927    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:52.049932    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:52.049949    9122 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 20:43:52.049993    9122 api_server.go:140] control plane version: v1.25.3
	I1025 20:43:52.050000    9122 api_server.go:130] duration metric: took 3.849933686s to wait for apiserver health ...
	I1025 20:43:52.050016    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:43:52.050024    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:43:52.071610    9122 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 20:43:52.092506    9122 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 20:43:52.098345    9122 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 20:43:52.098356    9122 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I1025 20:43:52.098361    9122 command_runner.go:130] > Device: 8eh/142d	Inode: 1185203     Links: 1
	I1025 20:43:52.098366    9122 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 20:43:52.098372    9122 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I1025 20:43:52.098377    9122 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I1025 20:43:52.098381    9122 command_runner.go:130] > Change: 2022-10-26 03:18:20.497780245 +0000
	I1025 20:43:52.098384    9122 command_runner.go:130] >  Birth: -
	I1025 20:43:52.098411    9122 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 20:43:52.098418    9122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1025 20:43:52.113650    9122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 20:43:52.990277    9122 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1025 20:43:52.991914    9122 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1025 20:43:52.994083    9122 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1025 20:43:53.011276    9122 command_runner.go:130] > daemonset.apps/kindnet configured
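
	With no CNI chosen explicitly and two nodes present, kindnet is recommended, its manifest is copied to the node, and it is applied with the bundled kubectl; the "unchanged"/"configured" output shows the apply is idempotent across restarts. A sketch of the selection and apply step (paths taken from the log, runSSH hypothetical as before):

	// chooseCNI mirrors the recommendation above: an explicit choice wins,
	// multi-node clusters get kindnet, single-node clusters need nothing extra.
	func chooseCNI(requested string, nodeCount int) string {
		if requested != "" {
			return requested
		}
		if nodeCount > 1 {
			return "kindnet"
		}
		return ""
	}

	func applyCNIManifest() error {
		_, err := runSSH("sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply" +
			" --kubeconfig=/var/lib/minikube/kubeconfig" +
			" -f /var/tmp/minikube/cni.yaml")
		return err
	}
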
	I1025 20:43:53.056810    9122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 20:43:53.056909    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:43:53.056921    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.056929    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.056937    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.062417    9122 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 20:43:53.062435    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.062442    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.062448    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.062461    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.062469    9122 round_trippers.go:580]     Audit-Id: 3431891b-4a28-47d9-907c-16c0df9e0448
	I1025 20:43:53.062475    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.062481    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.063966    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"963"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"736","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85214 chars]
	I1025 20:43:53.067470    9122 system_pods.go:59] 12 kube-system pods found
	I1025 20:43:53.067677    9122 system_pods.go:61] "coredns-565d847f94-tvhv6" [c89eabb7-66d0-469a-8966-ceeb6f9b215e] Running
	I1025 20:43:53.067686    9122 system_pods.go:61] "etcd-multinode-203818" [49b2d2ea-40ad-40fa-bab3-93930d3e9d10] Running
	I1025 20:43:53.067695    9122 system_pods.go:61] "kindnet-8xvrw" [a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07] Running
	I1025 20:43:53.067701    9122 system_pods.go:61] "kindnet-l9tx2" [0bc050f8-3916-4ad8-9eca-ec2de9c7c4d9] Running
	I1025 20:43:53.067710    9122 system_pods.go:61] "kindnet-q9qv5" [d5252527-eabb-4b78-9901-bfb15f51fc1b] Running
	I1025 20:43:53.067716    9122 system_pods.go:61] "kube-apiserver-multinode-203818" [e95d0701-3478-4373-8740-541b9481b83a] Running
	I1025 20:43:53.067743    9122 system_pods.go:61] "kube-controller-manager-multinode-203818" [cade2617-19dd-49f7-940e-d92e7b847fb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 20:43:53.067755    9122 system_pods.go:61] "kube-proxy-48p2l" [cf96a572-bbca-4af2-bd3e-7d377772cef4] Running
	I1025 20:43:53.067767    9122 system_pods.go:61] "kube-proxy-9j45q" [f3494f97-7b4b-4072-83ad-9a8308ed6c9b] Running
	I1025 20:43:53.067773    9122 system_pods.go:61] "kube-proxy-j799s" [281b0817-ab50-4c73-b20e-0774fcc2f594] Running
	I1025 20:43:53.067778    9122 system_pods.go:61] "kube-scheduler-multinode-203818" [352db6de-72fe-4aaa-b7b7-79881ea11d8e] Running
	I1025 20:43:53.067784    9122 system_pods.go:61] "storage-provisioner" [93c13130-1e73-4433-b82f-b565797df5c6] Running
	I1025 20:43:53.067789    9122 system_pods.go:74] duration metric: took 10.963464ms to wait for pod list to return data ...
	I1025 20:43:53.067798    9122 node_conditions.go:102] verifying NodePressure condition ...
	I1025 20:43:53.067893    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes
	I1025 20:43:53.067900    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.067908    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.067916    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.072655    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:43:53.072670    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.072676    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.072680    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.072684    9122 round_trippers.go:580]     Audit-Id: a6bc24c7-01c1-4fa1-8d3c-2039042e9cd9
	I1025 20:43:53.072688    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.072693    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.072697    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.072779    9122 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"967"},"items":[{"metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10904 chars]
	I1025 20:43:53.073263    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:43:53.073275    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:43:53.073286    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:43:53.073289    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:43:53.073293    9122 node_conditions.go:105] duration metric: took 5.491155ms to run NodePressure ...
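
	The NodePressure step reads the NodeList once and reports two capacity figures per node (both nodes here show 107016164Ki of ephemeral storage and 6 CPUs). A stdlib sketch of pulling those figures out of the JSON response body:

	// nodeList captures just the fields the capacity check needs.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func logNodeCapacities(body []byte) error {
		var nl nodeList
		if err := json.Unmarshal(body, &nl); err != nil {
			return err
		}
		for _, n := range nl.Items {
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
				n.Metadata.Name,
				n.Status.Capacity["ephemeral-storage"],
				n.Status.Capacity["cpu"])
		}
		return nil
	}
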
	I1025 20:43:53.073311    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 20:43:53.286253    9122 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1025 20:43:53.363042    9122 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1025 20:43:53.366573    9122 command_runner.go:130] ! W1026 03:43:53.178030    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:43:53.366597    9122 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1025 20:43:53.366648    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1025 20:43:53.366653    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.366660    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.366665    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.369944    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.369959    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.369966    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.369972    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.369981    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.369991    9122 round_trippers.go:580]     Audit-Id: 2a3fa272-d820-41a4-affd-f8c87d65facc
	I1025 20:43:53.370002    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.370023    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.370411    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"978"},"items":[{"metadata":{"name":"etcd-multinode-203818","namespace":"kube-system","uid":"49b2d2ea-40ad-40fa-bab3-93930d3e9d10","resourceVersion":"755","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.mirror":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.seen":"2022-10-26T03:38:46.168169599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30656 chars]
	I1025 20:43:53.371145    9122 kubeadm.go:778] kubelet initialised
	I1025 20:43:53.371154    9122 kubeadm.go:779] duration metric: took 4.550039ms waiting for restarted kubelet to initialise ...
	I1025 20:43:53.371162    9122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 20:43:53.371206    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:43:53.371212    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.371218    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.371224    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.375030    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.375061    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.375085    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.375100    9122 round_trippers.go:580]     Audit-Id: 8b143cb5-388f-464d-ab9e-60d90887fd97
	I1025 20:43:53.375108    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.375116    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.375121    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.375126    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.376208    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"978"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"736","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85422 chars]
	I1025 20:43:53.378159    9122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.378205    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/coredns-565d847f94-tvhv6
	I1025 20:43:53.378209    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.378216    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.378221    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.380399    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.380412    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.380418    9122 round_trippers.go:580]     Audit-Id: 4fdb392f-304a-4596-a459-e6158d8b61c7
	I1025 20:43:53.380422    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.380427    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.380431    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.380435    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.380444    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.380510    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"736","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6550 chars]
	I1025 20:43:53.380822    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.380828    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.380834    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.380839    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.383196    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.383214    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.383223    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.383228    9122 round_trippers.go:580]     Audit-Id: ca0c1a4d-14d6-4384-a917-cf9615c00f84
	I1025 20:43:53.383234    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.383242    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.383248    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.383252    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.383325    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.383549    9122 pod_ready.go:92] pod "coredns-565d847f94-tvhv6" in "kube-system" namespace has status "Ready":"True"
	I1025 20:43:53.383558    9122 pod_ready.go:81] duration metric: took 5.387904ms waiting for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.383565    9122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.383602    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/etcd-multinode-203818
	I1025 20:43:53.383607    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.383612    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.383617    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.386546    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.386564    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.386575    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.386581    9122 round_trippers.go:580]     Audit-Id: 78d854a5-38e6-4a8f-8810-f7171a549d85
	I1025 20:43:53.386587    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.386593    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.386597    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.386602    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.386811    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-203818","namespace":"kube-system","uid":"49b2d2ea-40ad-40fa-bab3-93930d3e9d10","resourceVersion":"755","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.mirror":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.seen":"2022-10-26T03:38:46.168169599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I1025 20:43:53.387057    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.387063    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.387070    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.387075    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.390084    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.390098    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.390104    9122 round_trippers.go:580]     Audit-Id: ec50719b-7494-4fba-b10b-1d01c774bc65
	I1025 20:43:53.390115    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.390121    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.390152    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.390160    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.390171    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.390226    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.390460    9122 pod_ready.go:92] pod "etcd-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:43:53.390468    9122 pod_ready.go:81] duration metric: took 6.89885ms waiting for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.390479    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.390514    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-203818
	I1025 20:43:53.390519    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.390525    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.390530    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.392746    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.392759    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.392767    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.392772    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.392777    9122 round_trippers.go:580]     Audit-Id: 0ba34327-c5b6-4453-b9c9-31d0c71759dd
	I1025 20:43:53.392785    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.392792    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.392796    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.392854    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-203818","namespace":"kube-system","uid":"e95d0701-3478-4373-8740-541b9481b83a","resourceVersion":"770","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.mirror":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.seen":"2022-10-26T03:38:46.168180019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I1025 20:43:53.393137    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.393144    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.393149    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.393154    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.395956    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.395967    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.395973    9122 round_trippers.go:580]     Audit-Id: 29b0b768-5e72-4f53-946d-7cf1f1365fbf
	I1025 20:43:53.395978    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.395983    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.395990    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.395995    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.395999    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.396054    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.396271    9122 pod_ready.go:92] pod "kube-apiserver-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:43:53.396280    9122 pod_ready.go:81] duration metric: took 5.796829ms waiting for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
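
	Each per-pod wait above boils down to one test: the pod's status.conditions entry of type Ready must carry status True, which is what the `has status "Ready":"True"` lines record; kube-controller-manager fails this test below (ContainersNotReady) and is polled until it passes. A stdlib sketch of the condition check on a decoded pod body:

	// podConditions captures only the fields the readiness test inspects.
	type podConditions struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// podIsReady reports whether the pod JSON carries condition Ready=True.
	func podIsReady(body []byte) (bool, error) {
		var p podConditions
		if err := json.Unmarshal(body, &p); err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil
	}
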
	I1025 20:43:53.396288    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:43:53.396326    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:53.396332    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.396340    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.396347    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.398938    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.398949    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.398955    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.398959    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.398964    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.398970    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.398976    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.398980    9122 round_trippers.go:580]     Audit-Id: fd02bf37-a7df-4c97-8efe-d3b50984e148
	I1025 20:43:53.399749    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:53.457302    9122 request.go:614] Waited for 57.096509ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.457364    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.457387    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.457395    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.457402    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.460682    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.460697    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.460703    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.460708    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.460713    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.460718    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.460726    9122 round_trippers.go:580]     Audit-Id: a8a8d202-3f30-4f93-b61a-c231ed5569e7
	I1025 20:43:53.460734    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.460916    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:53.961492    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:53.961514    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.961527    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.961537    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.965232    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:53.965245    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.965252    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.965259    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.965267    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.965274    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.965280    9122 round_trippers.go:580]     Audit-Id: 41eb336b-b11f-4509-a1ab-5198dca2b4b5
	I1025 20:43:53.965286    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.965362    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:53.965693    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:53.965699    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:53.965705    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:53.965710    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:53.967762    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:53.967771    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:53.967776    9122 round_trippers.go:580]     Audit-Id: ecb78efb-7be8-49c8-82f4-2cc96c45dcb2
	I1025 20:43:53.967781    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:53.967786    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:53.967790    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:53.967795    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:53.967800    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:53 GMT
	I1025 20:43:53.967847    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:54.461524    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:54.461536    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.461543    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.461548    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.463495    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:54.463510    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.463516    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.463521    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.463527    9122 round_trippers.go:580]     Audit-Id: 0a310735-d98a-4186-9a26-45f55b2f3f03
	I1025 20:43:54.463544    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.463553    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.463558    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.463793    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:54.464086    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:54.464092    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.464098    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.464103    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.466055    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:54.466064    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.466069    9122 round_trippers.go:580]     Audit-Id: 2f518151-1bb6-42e0-b71e-97ee41a0aaca
	I1025 20:43:54.466074    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.466079    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.466084    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.466089    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.466096    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.466303    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:54.961378    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:54.961394    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.961403    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.961410    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.964219    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:54.964230    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.964236    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.964245    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.964250    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.964258    9122 round_trippers.go:580]     Audit-Id: f0ceec54-c4ca-46fb-8618-70b87a51e52c
	I1025 20:43:54.964263    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.964267    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.964324    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:54.964612    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:54.964619    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:54.964624    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:54.964629    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:54.966386    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:54.966395    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:54.966400    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:54.966405    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:54.966410    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:54.966415    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:54 GMT
	I1025 20:43:54.966419    9122 round_trippers.go:580]     Audit-Id: ede5a4a8-73fd-4d37-8afb-bc40e3d9dcfe
	I1025 20:43:54.966424    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:54.966465    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:55.461365    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:55.461379    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.461389    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.461395    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.463717    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:55.463726    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.463731    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.463736    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.463740    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.463745    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.463750    9122 round_trippers.go:580]     Audit-Id: 35a37718-c105-4cb2-aa8f-a8f9721d26d5
	I1025 20:43:55.463757    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.463814    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:55.464095    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:55.464101    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.464107    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.464112    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.466041    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:55.466052    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.466066    9122 round_trippers.go:580]     Audit-Id: 9fb7cdbb-3f15-46d4-b54c-befdda4b3b6f
	I1025 20:43:55.466077    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.466083    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.466088    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.466093    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.466101    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.466145    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:55.466334    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
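	[The entries above and below record minikube's readiness probe: pod_ready.go re-fetches kube-controller-manager-multinode-203818 roughly every 500 ms and keeps logging Ready:"False" because the pod's Ready condition never flips to True before the test's timeout. For readers tracing the loop, here is a minimal client-go sketch of that polling pattern. It is an illustrative approximation only, not minikube's actual pod_ready.go, and the helper name waitForPodReady is hypothetical; the API calls are standard client-go of this era.

	    package main

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitForPodReady polls the API server on a fixed interval until the
	    // named pod reports condition Ready=True, mirroring the ~500 ms GET
	    // loop visible in the log.
	    func waitForPodReady(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
	            pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // transient API errors: keep polling
	            }
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady {
	                    // Each Ready:"False" line in the log corresponds to
	                    // this returning false, so the poll continues.
	                    return cond.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }

	Under this reading, every repeated GET of the pod (and of the node, which minikube checks alongside it) is one iteration of such a condition function returning false, until the surrounding timeout ends the test.]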
	I1025 20:43:55.961336    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:55.961360    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.961372    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.961384    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.965044    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:55.965071    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.965084    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.965095    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.965102    9122 round_trippers.go:580]     Audit-Id: 87b19661-d1c5-480b-b477-f744f86d0038
	I1025 20:43:55.965145    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.965162    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.965169    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.965491    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:55.965784    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:55.965790    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:55.965796    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:55.965801    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:55.967775    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:55.967784    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:55.967789    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:55.967794    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:55.967799    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:55 GMT
	I1025 20:43:55.967804    9122 round_trippers.go:580]     Audit-Id: c498106a-b32b-4e93-a5ad-f5b83d705344
	I1025 20:43:55.967808    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:55.967812    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:55.967955    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:56.461835    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:56.461849    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.461858    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.461864    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.464548    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:56.464557    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.464563    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.464568    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.464573    9122 round_trippers.go:580]     Audit-Id: 964a3895-50e7-4eff-b652-d06f37a9ce6c
	I1025 20:43:56.464578    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.464582    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.464587    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.464656    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:56.464935    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:56.464941    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.464946    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.464952    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.466842    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:56.466861    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.466869    9122 round_trippers.go:580]     Audit-Id: 029e1131-2571-4ed8-b933-39c6cbebeae5
	I1025 20:43:56.466881    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.466887    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.466893    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.466901    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.466907    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.466955    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:56.962237    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:56.962252    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.962261    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.962268    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.964923    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:56.964933    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.964939    9122 round_trippers.go:580]     Audit-Id: 51c35e46-86d7-455e-b050-2f053483af87
	I1025 20:43:56.964943    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.964948    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.964953    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.964958    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.964962    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.965018    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:56.965316    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:56.965322    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:56.965328    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:56.965347    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:56.967413    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:56.967422    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:56.967428    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:56.967436    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:56 GMT
	I1025 20:43:56.967462    9122 round_trippers.go:580]     Audit-Id: 1689ee5e-f470-4fdf-9fc4-1dc1c286ddd1
	I1025 20:43:56.967473    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:56.967478    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:56.967483    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:56.967561    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:57.463260    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:57.463281    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.463293    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.463303    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.466861    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:57.466883    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.466892    9122 round_trippers.go:580]     Audit-Id: b85859cb-e30d-4ea7-a0fc-eb2d3d867a19
	I1025 20:43:57.466900    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.466906    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.466913    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.466919    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.466926    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.467027    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:57.467414    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:57.467420    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.467426    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.467431    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.469746    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:57.469756    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.469761    9122 round_trippers.go:580]     Audit-Id: e8ffdc76-b96d-4721-9503-c93fc4b988f0
	I1025 20:43:57.469766    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.469771    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.469776    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.469781    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.469786    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.469945    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:57.470129    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:43:57.961601    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:57.961612    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.961619    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.961624    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.964142    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:57.964152    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.964160    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.964165    9122 round_trippers.go:580]     Audit-Id: b9ab7209-0573-4b04-abab-e473d11b4cf1
	I1025 20:43:57.964181    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.964190    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.964195    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.964217    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.964288    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:57.964583    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:57.964590    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:57.964596    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:57.964603    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:57.966543    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:57.966552    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:57.966557    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:57.966562    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:57.966567    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:57.966571    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:57 GMT
	I1025 20:43:57.966576    9122 round_trippers.go:580]     Audit-Id: 9829fced-039d-43c2-b2ea-ab699f805e0e
	I1025 20:43:57.966581    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:57.966619    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:58.461462    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:58.461482    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.461494    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.461504    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.464821    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:58.464834    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.464846    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.464852    9122 round_trippers.go:580]     Audit-Id: 249f9fc7-a9e3-40c0-84c6-6dc2a96b282c
	I1025 20:43:58.464856    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.464861    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.464866    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.464873    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.464928    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:58.465214    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:58.465221    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.465231    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.465241    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.467149    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:58.467157    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.467162    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.467167    9122 round_trippers.go:580]     Audit-Id: aad96f15-1792-4205-ba82-af459c86c8fe
	I1025 20:43:58.467172    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.467178    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.467186    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.467191    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.467236    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:58.961305    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:58.961317    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.961324    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.961329    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.964070    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:58.964080    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.964085    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.964089    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.964094    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.964098    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.964103    9122 round_trippers.go:580]     Audit-Id: 336f7c59-7d06-4863-98a4-f811b2e8df4c
	I1025 20:43:58.964108    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.964191    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:58.964467    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:58.964473    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:58.964479    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:58.964485    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:58.966449    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:43:58.966457    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:58.966463    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:58.966469    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:58.966473    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:58.966478    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:58.966483    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:58 GMT
	I1025 20:43:58.966488    9122 round_trippers.go:580]     Audit-Id: 8ed1dc65-c0b2-4440-b163-34412c6e174b
	I1025 20:43:58.966526    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:59.462970    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:59.462991    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.463003    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.463014    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.466901    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:43:59.466916    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.466924    9122 round_trippers.go:580]     Audit-Id: 66b08f05-35e9-48ae-9313-bb27d4e2ad23
	I1025 20:43:59.466930    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.466936    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.466943    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.466949    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.466956    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.467052    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:59.467429    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:59.467437    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.467445    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.467462    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.469701    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:59.469711    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.469716    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.469720    9122 round_trippers.go:580]     Audit-Id: b1eeaf14-ed27-44ca-a8cf-316c24d04607
	I1025 20:43:59.469727    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.469732    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.469738    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.469742    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.469955    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:43:59.470139    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:43:59.961810    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:43:59.961835    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.961847    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.961856    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.965965    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:43:59.965979    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.965987    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.966005    9122 round_trippers.go:580]     Audit-Id: 54207af3-5f85-4ce7-964d-a9c85e5a970e
	I1025 20:43:59.966017    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.966024    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.966037    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.966047    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.966128    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:43:59.966412    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:43:59.966421    9122 round_trippers.go:469] Request Headers:
	I1025 20:43:59.966427    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:43:59.966432    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:43:59.968741    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:43:59.968754    9122 round_trippers.go:577] Response Headers:
	I1025 20:43:59.968759    9122 round_trippers.go:580]     Audit-Id: 7db5888e-9633-4963-94fd-197911bb08a7
	I1025 20:43:59.968768    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:43:59.968774    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:43:59.968778    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:43:59.968783    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:43:59.968787    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:43:59 GMT
	I1025 20:43:59.968841    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:00.461285    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:00.461299    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.461307    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.461314    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.464136    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:00.464147    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.464153    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.464158    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.464163    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.464169    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.464174    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.464179    9122 round_trippers.go:580]     Audit-Id: 991b99c8-4fe3-45ad-b46a-2a2389711711
	I1025 20:44:00.464232    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:00.464513    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:00.464519    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.464527    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.464532    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.466385    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:00.466394    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.466399    9122 round_trippers.go:580]     Audit-Id: a15a2656-0fd2-45d9-bdff-abbae2596997
	I1025 20:44:00.466405    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.466409    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.466414    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.466419    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.466424    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.466465    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:00.961580    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:00.961604    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.961616    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.961626    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.965765    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:00.965780    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.965790    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.965805    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.965822    9122 round_trippers.go:580]     Audit-Id: 4b9d442f-8fcb-458a-90ab-3b21d2f26e7c
	I1025 20:44:00.965835    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.965844    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.965855    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.965968    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:00.966370    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:00.966377    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:00.966383    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:00.966388    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:00.968187    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:00.968196    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:00.968202    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:00 GMT
	I1025 20:44:00.968208    9122 round_trippers.go:580]     Audit-Id: bcf49b7b-a200-4cc0-aa3e-253d15d5b05e
	I1025 20:44:00.968212    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:00.968217    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:00.968222    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:00.968227    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:00.968273    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:01.461333    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:01.461352    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.461363    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.461373    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.464924    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:01.464936    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.464942    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.464946    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.464950    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.464956    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.464962    9122 round_trippers.go:580]     Audit-Id: a23e61bf-936f-484e-96ff-9428299b51b3
	I1025 20:44:01.464967    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.465027    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:01.465313    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:01.465319    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.465324    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.465330    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.467145    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:01.467155    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.467160    9122 round_trippers.go:580]     Audit-Id: c8039d72-4573-4ac7-8e7d-0f64758c8379
	I1025 20:44:01.467166    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.467171    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.467177    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.467182    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.467186    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.467227    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:01.963442    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:01.963463    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.963476    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.963486    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.967329    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:01.967345    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.967354    9122 round_trippers.go:580]     Audit-Id: 13167eca-388f-4409-9d7e-42e52b15e595
	I1025 20:44:01.967363    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.967374    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.967380    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.967387    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.967396    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.967502    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:01.967849    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:01.967855    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:01.967861    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:01.967866    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:01.969482    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:01.969494    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:01.969505    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:01 GMT
	I1025 20:44:01.969510    9122 round_trippers.go:580]     Audit-Id: 6d302772-7b14-4ad1-8f78-94e5218e789a
	I1025 20:44:01.969515    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:01.969520    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:01.969525    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:01.969529    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:01.969572    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:01.969758    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:44:02.461937    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:02.461957    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.461970    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.461980    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.465874    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:02.465889    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.465898    9122 round_trippers.go:580]     Audit-Id: 86b59cd3-9b42-402c-8a5b-ece04ddf7125
	I1025 20:44:02.465904    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.465911    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.465917    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.465924    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.465930    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.466006    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:02.466371    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:02.466382    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.466390    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.466397    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.468592    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:02.468601    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.468607    9122 round_trippers.go:580]     Audit-Id: fc29c905-e689-4edf-8bf7-3f2c81ffc63b
	I1025 20:44:02.468612    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.468622    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.468627    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.468631    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.468636    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.468679    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:02.962196    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:02.962219    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.962231    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.962241    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.965788    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:02.965837    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.965858    9122 round_trippers.go:580]     Audit-Id: 006f8565-7ed0-486e-9ed7-1fd5455319c1
	I1025 20:44:02.965873    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.965881    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.965886    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.965890    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.965895    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.965953    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:02.966229    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:02.966236    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:02.966242    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:02.966247    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:02.967937    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:02.967946    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:02.967952    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:02.967957    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:02 GMT
	I1025 20:44:02.967961    9122 round_trippers.go:580]     Audit-Id: 4face073-3799-4d8c-9a0b-6fabe078c341
	I1025 20:44:02.967966    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:02.967971    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:02.967975    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:02.968013    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:03.462838    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:03.462857    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.462869    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.462878    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.466393    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:03.466407    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.466415    9122 round_trippers.go:580]     Audit-Id: fde3f4f0-6128-472d-a448-fc0221434fe2
	I1025 20:44:03.466421    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.466428    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.466434    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.466441    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.466447    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.466524    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:03.466887    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:03.466895    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.466903    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.466910    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.468927    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:03.468937    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.468942    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.468947    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.468952    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.468956    9122 round_trippers.go:580]     Audit-Id: 25632417-1aef-41a0-8d2c-6ada72d5c0b9
	I1025 20:44:03.468961    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.468966    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.469316    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:03.963038    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:03.963059    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.963073    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.963083    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.966750    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:03.966761    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.966767    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.966773    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.966780    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.966786    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.966793    9122 round_trippers.go:580]     Audit-Id: cbd132f3-e41f-4599-ab6d-735d939b1f7b
	I1025 20:44:03.966801    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.966925    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:03.967208    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:03.967214    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:03.967220    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:03.967226    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:03.968942    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:03.968952    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:03.968958    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:03.968965    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:03.968971    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:03.968975    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:03.968980    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:03 GMT
	I1025 20:44:03.968984    9122 round_trippers.go:580]     Audit-Id: 900bf0a6-9fd6-42ad-8b1f-6e43686107e5
	I1025 20:44:03.969324    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:04.461415    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:04.461437    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.461449    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.461460    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.465110    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:04.465125    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.465132    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.465138    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.465145    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.465151    9122 round_trippers.go:580]     Audit-Id: d3788f9d-8d66-4c07-865d-de92c8b3635f
	I1025 20:44:04.465158    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.465164    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.465253    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:04.465638    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:04.465644    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.465650    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.465655    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.467348    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:04.467356    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.467362    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.467366    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.467371    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.467376    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.467380    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.467385    9122 round_trippers.go:580]     Audit-Id: ae9eba9c-20c2-4171-a1be-2e2d6e8e5d72
	I1025 20:44:04.467751    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:04.467950    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
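Note: the round_trippers.go and request.go lines throughout this log are client-go's debug logging, which the minikube test binary runs with high klog verbosity; as best I can tell from upstream client-go, v>=6 logs method/URL and timing, v>=8 adds the header lines and a truncated response body (hence the "[truncated N chars]" markers), and only v>=9 prints bodies in full. A standalone sketch of turning this on follows; the exact v-level thresholds and the kubeconfig path are assumptions, not taken from this report.

package main

import (
	"context"
	"flag"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"
)

func main() {
	// klog drives client-go's debug output; register klog's flags on the
	// default flag set and raise verbosity before any requests are made.
	klog.InitFlags(nil)
	flag.Set("v", "8") // assumption: >=8 reproduces the header/body dumps above
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		klog.Fatal(err)
	}

	// A single GET now also emits round_trippers.go/request.go debug lines.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"kube-controller-manager-multinode-203818", metav1.GetOptions{})
	if err != nil {
		klog.Fatal(err)
	}
	fmt.Println(pod.Name, pod.Status.Phase)
}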
	I1025 20:44:04.963282    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:04.963302    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.963315    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.963325    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.967113    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:04.967129    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.967136    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.967142    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.967150    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.967156    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.967162    9122 round_trippers.go:580]     Audit-Id: d008f2d1-7681-4294-a001-aa8c186d4667
	I1025 20:44:04.967168    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.967235    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:04.967608    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:04.967616    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:04.967624    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:04.967631    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:04.969472    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:04.969482    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:04.969487    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:04 GMT
	I1025 20:44:04.969493    9122 round_trippers.go:580]     Audit-Id: 703866cb-a18e-4a21-903c-d55f5944f9a0
	I1025 20:44:04.969497    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:04.969502    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:04.969506    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:04.969512    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:04.969692    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:05.461384    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:05.461404    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.461416    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.461426    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.465122    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:05.465135    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.465142    9122 round_trippers.go:580]     Audit-Id: 8955308e-66f0-4870-b6c5-5973365cf0c7
	I1025 20:44:05.465148    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.465155    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.465164    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.465171    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.465178    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.465316    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:05.465586    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:05.465592    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.465598    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.465603    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.467497    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:05.467507    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.467512    9122 round_trippers.go:580]     Audit-Id: 01184904-aba8-45fa-88f9-442e36d07d09
	I1025 20:44:05.467517    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.467522    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.467527    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.467531    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.467535    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.467573    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:05.961745    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:05.961769    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.961781    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.961792    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.966293    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:05.966308    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.966316    9122 round_trippers.go:580]     Audit-Id: 206f4229-6b93-4a29-b883-577cca3247f0
	I1025 20:44:05.966324    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.966332    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.966338    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.966345    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.966351    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.966456    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:05.966758    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:05.966765    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:05.966771    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:05.966776    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:05.968770    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:05.968777    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:05.968782    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:05.968787    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:05.968792    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:05.968797    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:05.968801    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:05 GMT
	I1025 20:44:05.968806    9122 round_trippers.go:580]     Audit-Id: 1e4bdbf6-359e-4202-bc6d-834ac2a87612
	I1025 20:44:05.968841    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.463166    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:06.463187    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.463200    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.463210    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.467743    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:06.467761    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.467770    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.467779    9122 round_trippers.go:580]     Audit-Id: 3a140dfc-bcd1-410d-a4be-bb895b7b9a1d
	I1025 20:44:06.467787    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.467802    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.467828    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.467834    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.467909    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"963","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8264 chars]
	I1025 20:44:06.468199    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.468206    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.468212    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.468217    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.470127    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.470139    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.470145    9122 round_trippers.go:580]     Audit-Id: cfd0edee-6370-41cb-abd4-deff4cced744
	I1025 20:44:06.470149    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.470154    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.470161    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.470165    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.470170    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.470311    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.470497    9122 pod_ready.go:102] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"False"
	I1025 20:44:06.961965    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:06.961985    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.961997    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.962006    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.965901    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:06.965916    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.965924    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.965930    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.965936    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.965943    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.965949    9122 round_trippers.go:580]     Audit-Id: c786cc2d-70d0-40cf-87fd-35437d0a5d15
	I1025 20:44:06.965956    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.966065    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"1060","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8003 chars]
	I1025 20:44:06.966454    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.966478    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.966484    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.966489    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.968277    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.968286    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.968292    9122 round_trippers.go:580]     Audit-Id: ea145754-1556-408b-93cf-533da6a0c5bc
	I1025 20:44:06.968299    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.968307    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.968314    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.968318    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.968338    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.968573    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.968770    9122 pod_ready.go:92] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.968782    9122 pod_ready.go:81] duration metric: took 13.572479873s waiting for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
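The 13.5s wait above is a plain poll: pod_ready.go re-fetches the pod (and its node) roughly every 500ms until the pod's Ready condition flips to True, which the resourceVersion bump from 963 to 1060 finally reflects. A minimal client-go sketch of that per-iteration check, assuming an already-built clientset; isPodReady is a hypothetical helper, not minikube's code:

```go
// Sketch of the readiness probe behind each GET pair in this log:
// fetch the pod and inspect its PodReady condition.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // condition not reported yet
}
```

The same loop structure accounts for every repeated GET/Response pair in the surrounding log.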
	I1025 20:44:06.968795    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.968827    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-48p2l
	I1025 20:44:06.968831    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.968837    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.968842    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.970764    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.970773    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.970779    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.970784    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.970789    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.970793    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.970799    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.970804    9122 round_trippers.go:580]     Audit-Id: 193a8eff-4bfe-4043-b64e-a3a6419ef31f
	I1025 20:44:06.970849    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-48p2l","generateName":"kube-proxy-","namespace":"kube-system","uid":"cf96a572-bbca-4af2-bd3e-7d377772cef4","resourceVersion":"1004","creationTimestamp":"2022-10-26T03:38:58Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1025 20:44:06.971079    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.971084    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.971090    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.971095    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.972844    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.972854    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.972861    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.972866    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.972871    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.972876    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.972880    9122 round_trippers.go:580]     Audit-Id: 6ce688f2-525e-4ef0-a5d9-770b52d70799
	I1025 20:44:06.972886    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.972937    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.973109    9122 pod_ready.go:92] pod "kube-proxy-48p2l" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.973116    9122 pod_ready.go:81] duration metric: took 4.315082ms waiting for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.973121    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.973149    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-9j45q
	I1025 20:44:06.973153    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.973158    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.973164    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.974917    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.974926    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.974931    9122 round_trippers.go:580]     Audit-Id: 11675016-fe8d-4bbf-8c1a-34a9a05cadef
	I1025 20:44:06.974936    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.974941    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.974946    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.974951    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.974955    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.975021    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9j45q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3494f97-7b4b-4072-83ad-9a8308ed6c9b","resourceVersion":"922","creationTimestamp":"2022-10-26T03:40:04Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1025 20:44:06.975259    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m03
	I1025 20:44:06.975265    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.975271    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.975276    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.976674    9122 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1025 20:44:06.976682    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.976687    9122 round_trippers.go:580]     Audit-Id: 4833800c-c6d2-4b0a-a530-2a6ceeffee0b
	I1025 20:44:06.976692    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.976697    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.976702    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.976706    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.976711    9122 round_trippers.go:580]     Content-Length: 210
	I1025 20:44:06.976715    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.976724    9122 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-203818-m03\" not found","reason":"NotFound","details":{"name":"multinode-203818-m03","kind":"nodes"},"code":404}
	I1025 20:44:06.976819    9122 pod_ready.go:97] node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
	I1025 20:44:06.976826    9122 pod_ready.go:81] duration metric: took 3.700964ms waiting for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	E1025 20:44:06.976831    9122 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
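The 404 here is the interesting branch: node multinode-203818-m03 was removed earlier in the test, so the wait deliberately skips kube-proxy-9j45q rather than failing. In client-go terms the decision hinges on apierrors.IsNotFound; a sketch under that assumption (nodeGone is a hypothetical name):

```go
// Sketch: treat a deleted node as "skip its pods", mirroring the
// WaitExtra branch logged above.
package readiness

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeGone(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // node deleted: pods scheduled on it can't become Ready
	}
	return false, err // nil on success, or a real (non-404) failure
}
```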
	I1025 20:44:06.976836    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.976859    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-j799s
	I1025 20:44:06.976863    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.976868    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.976873    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.978358    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.978366    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.978371    9122 round_trippers.go:580]     Audit-Id: 4a677621-bea5-4950-a02b-1d57cf293fdf
	I1025 20:44:06.978375    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.978381    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.978385    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.978391    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.978395    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:06 GMT
	I1025 20:44:06.978437    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j799s","generateName":"kube-proxy-","namespace":"kube-system","uid":"281b0817-ab50-4c73-b20e-0774fcc2f594","resourceVersion":"840","creationTimestamp":"2022-10-26T03:39:21Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:39:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1025 20:44:06.978664    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:06.978670    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.978676    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.978681    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.980460    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.980469    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.980475    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.980484    9122 round_trippers.go:580]     Audit-Id: efe4cdac-9115-4574-a5a1-499a843dfa63
	I1025 20:44:06.980489    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.980493    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.980499    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.980504    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.980541    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818-m02","uid":"7c7037c9-edec-40ae-94ec-6fc8e2997faa","resourceVersion":"854","creationTimestamp":"2022-10-26T03:42:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4537 chars]
	I1025 20:44:06.980705    9122 pod_ready.go:92] pod "kube-proxy-j799s" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.980711    9122 pod_ready.go:81] duration metric: took 3.871169ms waiting for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.980717    9122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.980764    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-203818
	I1025 20:44:06.980768    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.980773    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.980779    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.982773    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.982782    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.982788    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.982793    9122 round_trippers.go:580]     Audit-Id: 33cd74c5-82df-42cc-b7bf-de8de4cd7bbb
	I1025 20:44:06.982798    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.982803    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.982807    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.982812    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.982863    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-203818","namespace":"kube-system","uid":"352db6de-72fe-4aaa-b7b7-79881ea11d8e","resourceVersion":"1029","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"134975eec8557874af571021bafa86c4","kubernetes.io/config.mirror":"134975eec8557874af571021bafa86c4","kubernetes.io/config.seen":"2022-10-26T03:38:46.168181423Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1025 20:44:06.983059    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:06.983065    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.983071    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.983077    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.984731    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:06.984740    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.984745    9122 round_trippers.go:580]     Audit-Id: 8e11e625-e7de-42b0-9b3f-7810ffc92c85
	I1025 20:44:06.984750    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.984755    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.984759    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.984765    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.984769    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.984812    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:06.984998    9122 pod_ready.go:92] pod "kube-scheduler-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:06.985003    9122 pod_ready.go:81] duration metric: took 4.282774ms waiting for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:06.985009    9122 pod_ready.go:38] duration metric: took 13.613826595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 20:44:06.985018    9122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 20:44:06.992881    9122 command_runner.go:130] > -16
	I1025 20:44:06.993003    9122 ops.go:34] apiserver oom_adj: -16
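The legacy oom_adj score ranges from -17 (never OOM-kill) to +15, so the -16 read back through /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver is strongly shielded from the kernel OOM killer. The same probe done natively in Go, assuming the PID is already known (oomAdj is a hypothetical helper):

```go
// Sketch: read a process's legacy oom_adj value straight from procfs,
// equivalent to the bash one-liner minikube runs above.
package readiness

import (
	"fmt"
	"os"
	"strings"
)

func oomAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil // e.g. "-16"
}
```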
	I1025 20:44:06.993010    9122 kubeadm.go:631] restartCluster took 25.079628184s
	I1025 20:44:06.993017    9122 kubeadm.go:398] StartCluster complete in 25.108328524s
	I1025 20:44:06.993029    9122 settings.go:142] acquiring lock: {Name:mk8a865dc85ed559178cd0a5f8f4fdd48ae81a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:44:06.993100    9122 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:06.993487    9122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/kubeconfig: {Name:mke147bd0f9c02680989e4cfb1c572f71a0430b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:44:06.993952    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:06.994118    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
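The rest.Config dump shows the client authenticates with cert/key/CA files from the test profile rather than inline data, with QPS and Burst left at zero. A hand-built equivalent is sketched below; minikube actually derives this from the kubeconfig it just wrote, so treat the construction (and the newClient name) as illustrative only:

```go
// Sketch: assemble a client-go rest.Config matching the fields dumped above.
package readiness

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://127.0.0.1:51345",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}
```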
	I1025 20:44:06.994332    9122 round_trippers.go:463] GET https://127.0.0.1:51345/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 20:44:06.994338    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:06.994343    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:06.994349    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:06.996382    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:06.996391    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:06.996396    9122 round_trippers.go:580]     Content-Length: 292
	I1025 20:44:06.996401    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:06.996406    9122 round_trippers.go:580]     Audit-Id: 5c46012d-92c6-4b2b-b5de-464b29383ac5
	I1025 20:44:06.996410    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:06.996415    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:06.996420    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:06.996424    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:06.996435    9122 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"47d25851-4c75-45e2-a9b2-efff685984f8","resourceVersion":"1045","creationTimestamp":"2022-10-26T03:38:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 20:44:06.996505    9122 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-203818" rescaled to 1
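kapi.go reads the coredns Scale subresource and, since the response already shows spec.replicas of 1, can report "rescaled to 1" after the single GET. A sketch of that read-then-maybe-write pattern against the scale subresource (rescale is a hypothetical helper):

```go
// Sketch: pin a deployment's replica count via its scale subresource,
// skipping the write when the cluster is already at the target.
package readiness

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescale(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count; nothing to do
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```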
	I1025 20:44:06.996536    9122 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 20:44:06.996547    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 20:44:06.996572    9122 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1025 20:44:06.996689    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:07.037601    9122 out.go:177] * Verifying Kubernetes components...
	I1025 20:44:07.037661    9122 addons.go:65] Setting storage-provisioner=true in profile "multinode-203818"
	I1025 20:44:07.037669    9122 addons.go:65] Setting default-storageclass=true in profile "multinode-203818"
	I1025 20:44:07.058856    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:44:07.058860    9122 addons.go:153] Setting addon storage-provisioner=true in "multinode-203818"
	I1025 20:44:07.055512    9122 command_runner.go:130] > apiVersion: v1
	W1025 20:44:07.058873    9122 addons.go:162] addon storage-provisioner should already be in state true
	I1025 20:44:07.058866    9122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-203818"
	I1025 20:44:07.058897    9122 command_runner.go:130] > data:
	I1025 20:44:07.058904    9122 command_runner.go:130] >   Corefile: |
	I1025 20:44:07.058907    9122 command_runner.go:130] >     .:53 {
	I1025 20:44:07.058912    9122 command_runner.go:130] >         errors
	I1025 20:44:07.058922    9122 command_runner.go:130] >         health {
	I1025 20:44:07.058934    9122 command_runner.go:130] >            lameduck 5s
	I1025 20:44:07.058937    9122 command_runner.go:130] >         }
	I1025 20:44:07.058941    9122 command_runner.go:130] >         ready
	I1025 20:44:07.058948    9122 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 20:44:07.058948    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:07.058952    9122 command_runner.go:130] >            pods insecure
	I1025 20:44:07.058963    9122 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 20:44:07.058972    9122 command_runner.go:130] >            ttl 30
	I1025 20:44:07.058977    9122 command_runner.go:130] >         }
	I1025 20:44:07.058984    9122 command_runner.go:130] >         prometheus :9153
	I1025 20:44:07.058989    9122 command_runner.go:130] >         hosts {
	I1025 20:44:07.058993    9122 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I1025 20:44:07.058998    9122 command_runner.go:130] >            fallthrough
	I1025 20:44:07.059001    9122 command_runner.go:130] >         }
	I1025 20:44:07.059006    9122 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 20:44:07.059010    9122 command_runner.go:130] >            max_concurrent 1000
	I1025 20:44:07.059013    9122 command_runner.go:130] >         }
	I1025 20:44:07.059024    9122 command_runner.go:130] >         cache 30
	I1025 20:44:07.059027    9122 command_runner.go:130] >         loop
	I1025 20:44:07.059036    9122 command_runner.go:130] >         reload
	I1025 20:44:07.059039    9122 command_runner.go:130] >         loadbalance
	I1025 20:44:07.059043    9122 command_runner.go:130] >     }
	I1025 20:44:07.059046    9122 command_runner.go:130] > kind: ConfigMap
	I1025 20:44:07.059049    9122 command_runner.go:130] > metadata:
	I1025 20:44:07.059053    9122 command_runner.go:130] >   creationTimestamp: "2022-10-26T03:38:46Z"
	I1025 20:44:07.059056    9122 command_runner.go:130] >   name: coredns
	I1025 20:44:07.059060    9122 command_runner.go:130] >   namespace: kube-system
	I1025 20:44:07.059064    9122 command_runner.go:130] >   resourceVersion: "373"
	I1025 20:44:07.059070    9122 command_runner.go:130] >   uid: 537c1da9-7d52-4ec3-a656-99d3d1685483
	I1025 20:44:07.059150    9122 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
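The ConfigMap YAML interleaved above is the evidence for this skip: the Corefile's hosts block already maps 192.168.65.2 to host.minikube.internal, so no patch is needed. A sketch of that containment check (hasHostRecord is a hypothetical name):

```go
// Sketch: fetch the coredns ConfigMap and test its Corefile for the
// host record, mirroring the "already contains ... skipping" branch.
package readiness

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func hasHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}
```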
	I1025 20:44:07.059196    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:44:07.059309    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:44:07.069777    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
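cli_runner recovers the host-side port of the container's 8443/tcp mapping (51345 here) with a docker Go template. A sketch of the same lookup from Go follows; note the logged command wraps the template in single quotes, so the output is trimmed of them (hostPort is a hypothetical helper):

```go
// Sketch: shell out to `docker container inspect` with the template from
// the log to recover the forwarded API-server port.
package readiness

import (
	"os/exec"
	"strings"
)

func hostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil // e.g. "51345"
}
```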
	I1025 20:44:07.130701    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:07.151236    9122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 20:44:07.151609    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 20:44:07.172099    9122 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 20:44:07.172122    9122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
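"scp memory -->" means the 2676 manifest bytes travel straight from the minikube binary over the SSH session; nothing is staged on local disk. One rough approximation with a stock ssh client is sketched below (pushBytes is hypothetical; minikube's own ssh_runner handles this internally, and key-based auth to the node is assumed):

```go
// Sketch: stream in-memory bytes to a remote path over ssh by piping
// them into a remote `sudo tee`, roughly what "scp memory -->" does.
package readiness

import (
	"bytes"
	"fmt"
	"os/exec"
)

func pushBytes(keyPath, port, dest string, data []byte) error {
	cmd := exec.Command("ssh", "-i", keyPath, "-p", port, "docker@127.0.0.1",
		fmt.Sprintf("sudo tee %s >/dev/null", dest))
	cmd.Stdin = bytes.NewReader(data) // the "memory" side of the copy
	return cmd.Run()
}
```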
	I1025 20:44:07.172214    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:07.173038    9122 round_trippers.go:463] GET https://127.0.0.1:51345/apis/storage.k8s.io/v1/storageclasses
	I1025 20:44:07.173192    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.173249    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.173271    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.177352    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:07.177367    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.177373    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.177377    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.177396    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.177402    9122 round_trippers.go:580]     Content-Length: 1274
	I1025 20:44:07.177407    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.177412    9122 round_trippers.go:580]     Audit-Id: d89b0661-6f2c-45c2-9fc8-38cc7566449a
	I1025 20:44:07.177417    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.177474    9122 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"standard","uid":"3755b1f0-0744-497d-9808-f887a0391448","resourceVersion":"382","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I1025 20:44:07.179150    9122 node_ready.go:35] waiting up to 6m0s for node "multinode-203818" to be "Ready" ...
	I1025 20:44:07.179213    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:07.179218    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.179224    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.179232    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.179234    9122 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3755b1f0-0744-497d-9808-f887a0391448","resourceVersion":"382","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 20:44:07.179272    9122 round_trippers.go:463] PUT https://127.0.0.1:51345/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 20:44:07.179278    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.179284    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.179292    9122 round_trippers.go:473]     Content-Type: application/json
	I1025 20:44:07.179300    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.181672    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:07.181684    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.181689    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.181694    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.181698    9122 round_trippers.go:580]     Audit-Id: d224ba65-c794-4398-8ac3-629ac25a051d
	I1025 20:44:07.181704    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.181709    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.181714    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.181805    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:07.182026    9122 node_ready.go:49] node "multinode-203818" has status "Ready":"True"
	I1025 20:44:07.182033    9122 node_ready.go:38] duration metric: took 2.8645ms waiting for node "multinode-203818" to be "Ready" ...
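node_ready.go's 6m0s budget resolves on its first probe here because the control-plane node already reports Ready. When a wait does have to spin, the cadence visible earlier in this log is roughly one probe per 500ms; a sketch of that loop using apimachinery's polling helper (waitNodeReady is a hypothetical name):

```go
// Sketch: poll a node's Ready condition on a fixed interval until it is
// True or the timeout expires, matching the ~500ms cadence in the log.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat fetch errors as transient; keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```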
	I1025 20:44:07.182040    9122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 20:44:07.182753    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:07.182763    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.182769    9122 round_trippers.go:580]     Audit-Id: d0854b58-9b99-4067-8510-a900b6a8d0d0
	I1025 20:44:07.182773    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.182779    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.182783    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.182788    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.182793    9122 round_trippers.go:580]     Content-Length: 1220
	I1025 20:44:07.182798    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.182814    9122 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3755b1f0-0744-497d-9808-f887a0391448","resourceVersion":"382","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 20:44:07.182874    9122 addons.go:153] Setting addon default-storageclass=true in "multinode-203818"
	W1025 20:44:07.182882    9122 addons.go:162] addon default-storageclass should already be in state true
	I1025 20:44:07.182896    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:07.183190    9122 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:44:07.240259    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:07.246626    9122 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 20:44:07.246637    9122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 20:44:07.246708    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:07.309890    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:07.333229    9122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 20:44:07.362706    9122 request.go:614] Waited for 180.629775ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:07.362750    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:07.362755    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.362761    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.362768    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.366669    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:07.366682    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.366688    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.366692    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.366703    9122 round_trippers.go:580]     Audit-Id: 0e0a1e65-2e31-48d8-baf2-ed42a4bd3e8b
	I1025 20:44:07.366710    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.366714    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.366718    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.367948    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84959 chars]
	I1025 20:44:07.369937    9122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:07.400416    9122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 20:44:07.485959    9122 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1025 20:44:07.487687    9122 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1025 20:44:07.489063    9122 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 20:44:07.490820    9122 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 20:44:07.492465    9122 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1025 20:44:07.498462    9122 command_runner.go:130] > pod/storage-provisioner configured
	I1025 20:44:07.562086    9122 request.go:614] Waited for 192.095418ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/coredns-565d847f94-tvhv6
	I1025 20:44:07.562120    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/coredns-565d847f94-tvhv6
	I1025 20:44:07.562124    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.562130    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.562135    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.566435    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:07.566451    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.566456    9122 round_trippers.go:580]     Audit-Id: f140f3a5-df29-4e64-bd10-9e667a723d2b
	I1025 20:44:07.566462    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.566466    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.566471    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.566475    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.566480    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.566561    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6551 chars]
	I1025 20:44:07.620355    9122 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1025 20:44:07.648263    9122 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 20:44:07.691569    9122 addons.go:414] enableAddons completed in 694.998234ms
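
The addon step above is idempotent: minikube re-applies the bundled manifests through kubectl, and the "... unchanged" lines show that a second apply of identical objects is a no-op rather than an error. A minimal sketch of the same invocation shape, assuming the in-VM paths from the log and running locally instead of through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the command in the log: sudo accepts a leading
// VAR=value pair, so KUBECONFIG reaches kubectl's environment.
func applyAddon(manifest string) (string, error) {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.25.3/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		fmt.Println("apply failed:", err)
	}
	fmt.Print(out) // e.g. "serviceaccount/storage-provisioner unchanged"
}
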
	I1025 20:44:07.762021    9122 request.go:614] Waited for 195.136053ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:07.762058    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:07.762065    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.762073    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.762084    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.764656    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:07.764669    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.764678    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.764686    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.764693    9122 round_trippers.go:580]     Audit-Id: e4c82b80-5d42-4fce-bd6a-3050cd08ff1c
	I1025 20:44:07.764700    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.764706    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.764718    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.764972    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:07.765194    9122 pod_ready.go:92] pod "coredns-565d847f94-tvhv6" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:07.765201    9122 pod_ready.go:81] duration metric: took 395.254043ms waiting for pod "coredns-565d847f94-tvhv6" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:07.765207    9122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:07.961986    9122 request.go:614] Waited for 196.741024ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/etcd-multinode-203818
	I1025 20:44:07.962138    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/etcd-multinode-203818
	I1025 20:44:07.962151    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:07.962163    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:07.962174    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:07.966307    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:07.966323    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:07.966330    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:07.966336    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:07.966345    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:07 GMT
	I1025 20:44:07.966352    9122 round_trippers.go:580]     Audit-Id: 51b91cac-b510-47b9-88b4-ca1c46e71e93
	I1025 20:44:07.966361    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:07.966369    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:07.966453    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-203818","namespace":"kube-system","uid":"49b2d2ea-40ad-40fa-bab3-93930d3e9d10","resourceVersion":"1058","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.mirror":"9e23e3127f731572e24f90a2ed68c5ef","kubernetes.io/config.seen":"2022-10-26T03:38:46.168169599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6046 chars]
	I1025 20:44:08.162808    9122 request.go:614] Waited for 196.025571ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.162905    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.162921    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.162933    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.162944    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.166817    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.166835    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.166851    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.166859    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.166865    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.166871    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.166878    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.166884    9122 round_trippers.go:580]     Audit-Id: 6c70b420-3766-4887-98bd-ab76e8a7723c
	I1025 20:44:08.167478    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:08.167685    9122 pod_ready.go:92] pod "etcd-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:08.167692    9122 pod_ready.go:81] duration metric: took 402.479342ms waiting for pod "etcd-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.167706    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.364018    9122 request.go:614] Waited for 196.222595ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-203818
	I1025 20:44:08.364077    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-203818
	I1025 20:44:08.364089    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.364107    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.364119    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.367813    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.367831    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.367841    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.367863    9122 round_trippers.go:580]     Audit-Id: 98b5633c-6893-45a7-9c41-806715c762aa
	I1025 20:44:08.367874    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.367880    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.367886    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.367912    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.368247    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-203818","namespace":"kube-system","uid":"e95d0701-3478-4373-8740-541b9481b83a","resourceVersion":"1056","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.mirror":"f6506a534121567265ef26f28d4105d5","kubernetes.io/config.seen":"2022-10-26T03:38:46.168180019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8428 chars]
	I1025 20:44:08.562157    9122 request.go:614] Waited for 193.608524ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.562299    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.562310    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.562321    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.562333    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.566288    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.566307    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.566316    9122 round_trippers.go:580]     Audit-Id: f3046232-4ada-4e5c-857d-bd735606aa80
	I1025 20:44:08.566323    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.566329    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.566335    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.566346    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.566354    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.566448    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:08.566752    9122 pod_ready.go:92] pod "kube-apiserver-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:08.566759    9122 pod_ready.go:81] duration metric: took 399.048922ms waiting for pod "kube-apiserver-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.566765    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.762597    9122 request.go:614] Waited for 195.717109ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:08.762666    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-203818
	I1025 20:44:08.762677    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.762691    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.762702    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.767351    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:08.767366    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.767373    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.767380    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.767387    9122 round_trippers.go:580]     Audit-Id: afa51a08-8d07-4613-ae98-b0f2c5f7c4b0
	I1025 20:44:08.767394    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.767402    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.767406    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.767591    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-203818","namespace":"kube-system","uid":"cade2617-19dd-49f7-940e-d92e7b847fb0","resourceVersion":"1060","creationTimestamp":"2022-10-26T03:38:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.mirror":"7cbc05e9939b8720a91889a29a2b891d","kubernetes.io/config.seen":"2022-10-26T03:38:35.304762758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8003 chars]
	I1025 20:44:08.963074    9122 request.go:614] Waited for 195.066372ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.963127    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:08.963135    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:08.963146    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:08.963159    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:08.966919    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:08.966935    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:08.966943    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:08.966950    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:08 GMT
	I1025 20:44:08.966958    9122 round_trippers.go:580]     Audit-Id: 2c709be6-e90f-4e55-8517-1d00601a24a4
	I1025 20:44:08.966965    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:08.966972    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:08.966979    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:08.967049    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:08.967332    9122 pod_ready.go:92] pod "kube-controller-manager-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:08.967339    9122 pod_ready.go:81] duration metric: took 400.568509ms waiting for pod "kube-controller-manager-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:08.967346    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.164022    9122 request.go:614] Waited for 196.623899ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-48p2l
	I1025 20:44:09.164137    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-48p2l
	I1025 20:44:09.164147    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.164159    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.164170    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.168358    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:09.168373    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.168380    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.168390    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.168397    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.168404    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.168410    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.168417    9122 round_trippers.go:580]     Audit-Id: 2624caff-b6fa-41b3-8bb1-cec9241d53a4
	I1025 20:44:09.168488    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-48p2l","generateName":"kube-proxy-","namespace":"kube-system","uid":"cf96a572-bbca-4af2-bd3e-7d377772cef4","resourceVersion":"1004","creationTimestamp":"2022-10-26T03:38:58Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1025 20:44:09.363840    9122 request.go:614] Waited for 195.001817ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:09.364013    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:09.364024    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.364035    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.364045    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.368021    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:09.368037    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.368045    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.368052    9122 round_trippers.go:580]     Audit-Id: 3cf4fa20-c988-4f3e-bb9a-5f5a6e5fb2b1
	I1025 20:44:09.368059    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.368070    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.368077    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.368083    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.368155    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:09.368414    9122 pod_ready.go:92] pod "kube-proxy-48p2l" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:09.368424    9122 pod_ready.go:81] duration metric: took 401.071604ms waiting for pod "kube-proxy-48p2l" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.368432    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.562008    9122 request.go:614] Waited for 193.52219ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-9j45q
	I1025 20:44:09.562057    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-9j45q
	I1025 20:44:09.562065    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.562077    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.562091    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.566140    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:09.566169    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.566176    9122 round_trippers.go:580]     Audit-Id: 276eab66-ab1b-4b1e-98e7-1e40f2ac558d
	I1025 20:44:09.566187    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.566192    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.566197    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.566202    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.566211    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.566313    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9j45q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3494f97-7b4b-4072-83ad-9a8308ed6c9b","resourceVersion":"922","creationTimestamp":"2022-10-26T03:40:04Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1025 20:44:09.761966    9122 request.go:614] Waited for 195.374246ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m03
	I1025 20:44:09.762046    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m03
	I1025 20:44:09.762051    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.762059    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.762066    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.764502    9122 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1025 20:44:09.764512    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.764517    9122 round_trippers.go:580]     Audit-Id: 28eea3d3-7b04-4e39-b277-8ff1f2a410ab
	I1025 20:44:09.764522    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.764527    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.764533    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.764539    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.764544    9122 round_trippers.go:580]     Content-Length: 210
	I1025 20:44:09.764548    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.764564    9122 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-203818-m03\" not found","reason":"NotFound","details":{"name":"multinode-203818-m03","kind":"nodes"},"code":404}
	I1025 20:44:09.764627    9122 pod_ready.go:97] node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
	I1025 20:44:09.764636    9122 pod_ready.go:81] duration metric: took 396.199602ms waiting for pod "kube-proxy-9j45q" in "kube-system" namespace to be "Ready" ...
	E1025 20:44:09.764642    9122 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-203818-m03" hosting pod "kube-proxy-9j45q" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-203818-m03": nodes "multinode-203818-m03" not found
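
The 404 above is not fatal: the third node was removed during the restart, so the wait loop converts the typed NotFound error into a "skipping" log line and moves on to the next kube-proxy pod. A sketch of how such a check looks with client-go's typed error helpers (kubeconfig path and node name taken from the log; cluster state assumed):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The apiserver's 404 Status body decodes into a typed error that
	// apierrors.IsNotFound recognizes, so a deleted node is distinguishable
	// from a transport failure.
	_, err = cs.CoreV1().Nodes().Get(context.TODO(), "multinode-203818-m03", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("node gone; skip readiness checks for its pods")
	}
}
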
	I1025 20:44:09.764655    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:09.962736    9122 request.go:614] Waited for 197.986422ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-j799s
	I1025 20:44:09.962817    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-proxy-j799s
	I1025 20:44:09.962827    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:09.962841    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:09.962852    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:09.966803    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:09.966817    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:09.966824    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:09.966838    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:09.966845    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:09.966851    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:09 GMT
	I1025 20:44:09.966863    9122 round_trippers.go:580]     Audit-Id: 3c39cbc8-d1c0-4cee-b472-86989131f6cd
	I1025 20:44:09.966870    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:09.966941    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j799s","generateName":"kube-proxy-","namespace":"kube-system","uid":"281b0817-ab50-4c73-b20e-0774fcc2f594","resourceVersion":"840","creationTimestamp":"2022-10-26T03:39:21Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:39:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8eac475-1d8a-4ec8-9fa7-9487b9a9cf10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1025 20:44:10.164023    9122 request.go:614] Waited for 196.719446ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:10.164102    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:10.164110    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.164134    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.164146    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.168057    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.168072    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.168080    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.168086    9122 round_trippers.go:580]     Audit-Id: 59d79d8a-cccf-49bd-a9f1-674d1f3bb491
	I1025 20:44:10.168093    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.168100    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.168106    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.168113    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.168181    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818-m02","uid":"7c7037c9-edec-40ae-94ec-6fc8e2997faa","resourceVersion":"854","creationTimestamp":"2022-10-26T03:42:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:42:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4537 chars]
	I1025 20:44:10.168449    9122 pod_ready.go:92] pod "kube-proxy-j799s" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:10.168456    9122 pod_ready.go:81] duration metric: took 403.796794ms waiting for pod "kube-proxy-j799s" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:10.168465    9122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:10.364040    9122 request.go:614] Waited for 195.517365ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-203818
	I1025 20:44:10.364151    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-203818
	I1025 20:44:10.364161    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.364172    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.364183    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.367594    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.367616    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.367624    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.367631    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.367637    9122 round_trippers.go:580]     Audit-Id: a79ccc07-97bc-4b16-bab4-0a2fd9f37651
	I1025 20:44:10.367646    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.367652    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.367658    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.367782    9122 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-203818","namespace":"kube-system","uid":"352db6de-72fe-4aaa-b7b7-79881ea11d8e","resourceVersion":"1029","creationTimestamp":"2022-10-26T03:38:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"134975eec8557874af571021bafa86c4","kubernetes.io/config.mirror":"134975eec8557874af571021bafa86c4","kubernetes.io/config.seen":"2022-10-26T03:38:46.168181423Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1025 20:44:10.562867    9122 request.go:614] Waited for 194.761379ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:10.562952    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes/multinode-203818
	I1025 20:44:10.562972    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.562991    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.563010    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.566796    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.566811    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.566819    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.566825    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.566832    9122 round_trippers.go:580]     Audit-Id: f762d591-1906-418c-b2ee-9039b32f2b79
	I1025 20:44:10.566843    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.566850    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.566856    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.566926    9122 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:43Z","fieldsType":"FieldsV1","fi [truncated 5322 chars]
	I1025 20:44:10.567181    9122 pod_ready.go:92] pod "kube-scheduler-multinode-203818" in "kube-system" namespace has status "Ready":"True"
	I1025 20:44:10.567193    9122 pod_ready.go:81] duration metric: took 398.718424ms waiting for pod "kube-scheduler-multinode-203818" in "kube-system" namespace to be "Ready" ...
	I1025 20:44:10.567219    9122 pod_ready.go:38] duration metric: took 3.385153731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
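
Each of the waits above follows the same pattern: poll the pod, read its Ready condition, and give up after 6m0s. A condensed sketch of that loop with client-go (pod name and kubeconfig path taken from the log; the 400ms interval is illustrative, chosen to sit near the throttler's ~200ms pacing):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady blocks until the pod's Ready condition is True or the
// timeout elapses, like the pod_ready.go waits in the log.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-565d847f94-tvhv6", 6*time.Minute))
}
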
	I1025 20:44:10.567234    9122 api_server.go:51] waiting for apiserver process to appear ...
	I1025 20:44:10.567279    9122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:44:10.576356    9122 command_runner.go:130] > 1821
	I1025 20:44:10.576881    9122 api_server.go:71] duration metric: took 3.580329189s to wait for apiserver process to appear ...
	I1025 20:44:10.576901    9122 api_server.go:87] waiting for apiserver healthz status ...
	I1025 20:44:10.576909    9122 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51345/healthz ...
	I1025 20:44:10.587904    9122 api_server.go:278] https://127.0.0.1:51345/healthz returned 200:
	ok
	I1025 20:44:10.587979    9122 round_trippers.go:463] GET https://127.0.0.1:51345/version
	I1025 20:44:10.587986    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.587995    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.588003    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.589294    9122 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 20:44:10.589308    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.589320    9122 round_trippers.go:580]     Audit-Id: 52557332-ebc9-43b2-a25e-be48ba582fd6
	I1025 20:44:10.589328    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.589335    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.589342    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.589349    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.589355    9122 round_trippers.go:580]     Content-Length: 263
	I1025 20:44:10.589362    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.589385    9122 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 20:44:10.589428    9122 api_server.go:140] control plane version: v1.25.3
	I1025 20:44:10.589437    9122 api_server.go:130] duration metric: took 12.529993ms to wait for apiserver health ...
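
The health gate is two plain HTTP checks: /healthz must return 200 with body "ok", and /version is then decoded to record the control-plane version (v1.25.3 here). A sketch of the probe, with TLS verification disabled purely for illustration in place of the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Stand-in for trusting the cluster CA certificate; illustration only.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://127.0.0.1:51345/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
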
	I1025 20:44:10.589444    9122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 20:44:10.762106    9122 request.go:614] Waited for 172.611538ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:10.762239    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:10.762249    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.762263    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.762274    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.767195    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:10.767208    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.767214    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.767220    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.767226    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.767233    9122 round_trippers.go:580]     Audit-Id: 5462aff0-b123-4039-a710-894a41f9557d
	I1025 20:44:10.767240    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.767251    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.768603    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84959 chars]
	I1025 20:44:10.770508    9122 system_pods.go:59] 12 kube-system pods found
	I1025 20:44:10.770518    9122 system_pods.go:61] "coredns-565d847f94-tvhv6" [c89eabb7-66d0-469a-8966-ceeb6f9b215e] Running
	I1025 20:44:10.770522    9122 system_pods.go:61] "etcd-multinode-203818" [49b2d2ea-40ad-40fa-bab3-93930d3e9d10] Running
	I1025 20:44:10.770527    9122 system_pods.go:61] "kindnet-8xvrw" [a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07] Running
	I1025 20:44:10.770531    9122 system_pods.go:61] "kindnet-l9tx2" [0bc050f8-3916-4ad8-9eca-ec2de9c7c4d9] Running
	I1025 20:44:10.770534    9122 system_pods.go:61] "kindnet-q9qv5" [d5252527-eabb-4b78-9901-bfb15f51fc1b] Running
	I1025 20:44:10.770538    9122 system_pods.go:61] "kube-apiserver-multinode-203818" [e95d0701-3478-4373-8740-541b9481b83a] Running
	I1025 20:44:10.770542    9122 system_pods.go:61] "kube-controller-manager-multinode-203818" [cade2617-19dd-49f7-940e-d92e7b847fb0] Running
	I1025 20:44:10.770545    9122 system_pods.go:61] "kube-proxy-48p2l" [cf96a572-bbca-4af2-bd3e-7d377772cef4] Running
	I1025 20:44:10.770549    9122 system_pods.go:61] "kube-proxy-9j45q" [f3494f97-7b4b-4072-83ad-9a8308ed6c9b] Running
	I1025 20:44:10.770552    9122 system_pods.go:61] "kube-proxy-j799s" [281b0817-ab50-4c73-b20e-0774fcc2f594] Running
	I1025 20:44:10.770556    9122 system_pods.go:61] "kube-scheduler-multinode-203818" [352db6de-72fe-4aaa-b7b7-79881ea11d8e] Running
	I1025 20:44:10.770560    9122 system_pods.go:61] "storage-provisioner" [93c13130-1e73-4433-b82f-b565797df5c6] Running
	I1025 20:44:10.770564    9122 system_pods.go:74] duration metric: took 181.114965ms to wait for pod list to return data ...
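
The recurring "Waited for ...ms due to client-side throttling" lines come from client-go's token-bucket limiter, not from the server's priority-and-fairness machinery: at the client default of QPS=5 each extra request queues for roughly 200ms, which matches the ~180-197ms waits throughout this log. A sketch of where that knob lives (the raised values are illustrative, not what minikube configures):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go applies QPS=5, Burst=10 when these are left zero; once the
	// burst is spent, request.go delays each call and logs the waits above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(cs != nil)
}
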
	I1025 20:44:10.770569    9122 default_sa.go:34] waiting for default service account to be created ...
	I1025 20:44:10.962661    9122 request.go:614] Waited for 191.924378ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/default/serviceaccounts
	I1025 20:44:10.962711    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/default/serviceaccounts
	I1025 20:44:10.962723    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:10.962736    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:10.962748    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:10.966532    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:10.966547    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:10.966554    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:10.966560    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:10.966568    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:10.966575    9122 round_trippers.go:580]     Content-Length: 262
	I1025 20:44:10.966581    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:10 GMT
	I1025 20:44:10.966588    9122 round_trippers.go:580]     Audit-Id: 10317f59-21a4-4558-8d84-d304f235334f
	I1025 20:44:10.966594    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:10.966611    9122 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"03440f91-d2ed-423b-adea-19369256c600","resourceVersion":"314","creationTimestamp":"2022-10-26T03:38:58Z"}}]}
	I1025 20:44:10.966769    9122 default_sa.go:45] found service account: "default"
	I1025 20:44:10.966778    9122 default_sa.go:55] duration metric: took 196.204791ms for default service account to be created ...
	I1025 20:44:10.966784    9122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 20:44:11.164179    9122 request.go:614] Waited for 197.237952ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:11.164246    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/namespaces/kube-system/pods
	I1025 20:44:11.164255    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:11.164272    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:11.164284    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:11.169255    9122 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 20:44:11.169267    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:11.169272    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:11.169277    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:11 GMT
	I1025 20:44:11.169283    9122 round_trippers.go:580]     Audit-Id: 01e85a7b-96dc-41df-9787-3861240a51d5
	I1025 20:44:11.169289    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:11.169296    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:11.169301    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:11.170183    9122 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1061"},"items":[{"metadata":{"name":"coredns-565d847f94-tvhv6","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"c89eabb7-66d0-469a-8966-ceeb6f9b215e","resourceVersion":"1016","creationTimestamp":"2022-10-26T03:38:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"a8aa0f67-423d-4a20-9361-50afcccf88e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-26T03:38:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8aa0f67-423d-4a20-9361-50afcccf88e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84959 chars]
	I1025 20:44:11.172092    9122 system_pods.go:86] 12 kube-system pods found
	I1025 20:44:11.172102    9122 system_pods.go:89] "coredns-565d847f94-tvhv6" [c89eabb7-66d0-469a-8966-ceeb6f9b215e] Running
	I1025 20:44:11.172106    9122 system_pods.go:89] "etcd-multinode-203818" [49b2d2ea-40ad-40fa-bab3-93930d3e9d10] Running
	I1025 20:44:11.172110    9122 system_pods.go:89] "kindnet-8xvrw" [a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07] Running
	I1025 20:44:11.172115    9122 system_pods.go:89] "kindnet-l9tx2" [0bc050f8-3916-4ad8-9eca-ec2de9c7c4d9] Running
	I1025 20:44:11.172119    9122 system_pods.go:89] "kindnet-q9qv5" [d5252527-eabb-4b78-9901-bfb15f51fc1b] Running
	I1025 20:44:11.172122    9122 system_pods.go:89] "kube-apiserver-multinode-203818" [e95d0701-3478-4373-8740-541b9481b83a] Running
	I1025 20:44:11.172131    9122 system_pods.go:89] "kube-controller-manager-multinode-203818" [cade2617-19dd-49f7-940e-d92e7b847fb0] Running
	I1025 20:44:11.172135    9122 system_pods.go:89] "kube-proxy-48p2l" [cf96a572-bbca-4af2-bd3e-7d377772cef4] Running
	I1025 20:44:11.172138    9122 system_pods.go:89] "kube-proxy-9j45q" [f3494f97-7b4b-4072-83ad-9a8308ed6c9b] Running
	I1025 20:44:11.172144    9122 system_pods.go:89] "kube-proxy-j799s" [281b0817-ab50-4c73-b20e-0774fcc2f594] Running
	I1025 20:44:11.172148    9122 system_pods.go:89] "kube-scheduler-multinode-203818" [352db6de-72fe-4aaa-b7b7-79881ea11d8e] Running
	I1025 20:44:11.172151    9122 system_pods.go:89] "storage-provisioner" [93c13130-1e73-4433-b82f-b565797df5c6] Running
	I1025 20:44:11.172155    9122 system_pods.go:126] duration metric: took 205.367711ms to wait for k8s-apps to be running ...
	I1025 20:44:11.172160    9122 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 20:44:11.172222    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:44:11.181462    9122 system_svc.go:56] duration metric: took 9.295705ms WaitForService to wait for kubelet.
	I1025 20:44:11.181480    9122 kubeadm.go:573] duration metric: took 4.184928507s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
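The map in the duration line above is the set of components minikube waited on: apiserver, running apps, the default service account, kubelet, node readiness, and system pods. The kubelet leg of that check can be repeated manually; a sketch assuming the profile from this run:

    # same probe minikube ran over SSH (systemctl is-active) a few lines up
    minikube -p multinode-203818 ssh -- sudo systemctl is-active kubelet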
	I1025 20:44:11.181496    9122 node_conditions.go:102] verifying NodePressure condition ...
	I1025 20:44:11.362010    9122 request.go:614] Waited for 180.448649ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51345/api/v1/nodes
	I1025 20:44:11.362038    9122 round_trippers.go:463] GET https://127.0.0.1:51345/api/v1/nodes
	I1025 20:44:11.362042    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:11.362048    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:11.362054    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:11.364620    9122 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 20:44:11.364629    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:11.364634    9122 round_trippers.go:580]     Audit-Id: 10b9264a-9a8a-4598-9808-eb4cc28e5be9
	I1025 20:44:11.364639    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:11.364646    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:11.364653    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:11.364659    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:11.364664    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:11 GMT
	I1025 20:44:11.364750    9122 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1061"},"items":[{"metadata":{"name":"multinode-203818","uid":"bdb7c9a1-a584-4581-a57f-2834d2a99bcf","resourceVersion":"961","creationTimestamp":"2022-10-26T03:38:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-203818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a202e21b7dfdf03a7523ceebf3573bc3065a5a1a","minikube.k8s.io/name":"multinode-203818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_10_25T20_38_46_0700","minikube.k8s.io/version":"v1.27.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 10905 chars]
	I1025 20:44:11.365082    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:44:11.365089    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:44:11.365097    9122 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 20:44:11.365101    9122 node_conditions.go:123] node cpu capacity is 6
	I1025 20:44:11.365105    9122 node_conditions.go:105] duration metric: took 183.6044ms to run NodePressure ...
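The NodePressure step reads CPU and ephemeral-storage capacity for both nodes out of the NodeList response above (6 CPUs and 107016164Ki each). The same figures are retrievable with jsonpath; a sketch assuming kubectl is pointed at this cluster:

    # dump the capacity block for the control-plane node
    kubectl get node multinode-203818 -o jsonpath='{.status.capacity}'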
	I1025 20:44:11.365111    9122 start.go:217] waiting for startup goroutines ...
	I1025 20:44:11.365800    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:11.365863    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:44:11.409835    9122 out.go:177] * Starting worker node multinode-203818-m02 in cluster multinode-203818
	I1025 20:44:11.431674    9122 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:44:11.453923    9122 out.go:177] * Pulling base image ...
	I1025 20:44:11.496714    9122 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:44:11.496753    9122 cache.go:57] Caching tarball of preloaded images
	I1025 20:44:11.496776    9122 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:44:11.496926    9122 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 20:44:11.496949    9122 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 20:44:11.497862    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:44:11.560883    9122 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:44:11.560896    9122 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:44:11.560905    9122 cache.go:208] Successfully downloaded all kic artifacts
	I1025 20:44:11.560992    9122 start.go:364] acquiring machines lock for multinode-203818-m02: {Name:mk1c2c2ef868528130aa99eb339d96e0521be812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:44:11.561056    9122 start.go:368] acquired machines lock for "multinode-203818-m02" in 52.175µs
	I1025 20:44:11.561070    9122 start.go:96] Skipping create...Using existing machine configuration
	I1025 20:44:11.561075    9122 fix.go:55] fixHost starting: m02
	I1025 20:44:11.561324    9122 cli_runner.go:164] Run: docker container inspect multinode-203818-m02 --format={{.State.Status}}
	I1025 20:44:11.625979    9122 fix.go:103] recreateIfNeeded on multinode-203818-m02: state=Stopped err=<nil>
	W1025 20:44:11.626000    9122 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 20:44:11.647764    9122 out.go:177] * Restarting existing docker container for "multinode-203818-m02" ...
	I1025 20:44:11.723412    9122 cli_runner.go:164] Run: docker start multinode-203818-m02
	I1025 20:44:12.059372    9122 cli_runner.go:164] Run: docker container inspect multinode-203818-m02 --format={{.State.Status}}
	I1025 20:44:12.124383    9122 kic.go:415] container "multinode-203818-m02" state is running.
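minikube polls the restarted container with the inspect template shown above until it reports running. The same check from a host shell, using the node-container name from this run:

    # prints "running" once `docker start` has brought the node container up
    docker container inspect -f '{{.State.Status}}' multinode-203818-m02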
	I1025 20:44:12.125031    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:44:12.194372    9122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/config.json ...
	I1025 20:44:12.194849    9122 machine.go:88] provisioning docker machine ...
	I1025 20:44:12.194880    9122 ubuntu.go:169] provisioning hostname "multinode-203818-m02"
	I1025 20:44:12.195019    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:12.271024    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:12.271206    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:12.271217    9122 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-203818-m02 && echo "multinode-203818-m02" | sudo tee /etc/hostname
	I1025 20:44:12.422715    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-203818-m02
	
	I1025 20:44:12.422808    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:12.488163    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:12.488370    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:12.488386    9122 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-203818-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-203818-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-203818-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 20:44:12.608786    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
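The SSH script above keeps /etc/hosts consistent with the new hostname: if a 127.0.1.1 entry exists it is rewritten in place with sed, otherwise one is appended. Afterwards the guest should contain a line like this (illustrative output):

    $ grep 127.0.1.1 /etc/hosts
    127.0.1.1 multinode-203818-m02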
	I1025 20:44:12.608802    9122 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/14956-2080/.minikube CaCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/14956-2080/.minikube}
	I1025 20:44:12.608812    9122 ubuntu.go:177] setting up certificates
	I1025 20:44:12.608818    9122 provision.go:83] configureAuth start
	I1025 20:44:12.608882    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:44:12.677679    9122 provision.go:138] copyHostCerts
	I1025 20:44:12.677722    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:44:12.677763    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem, removing ...
	I1025 20:44:12.677768    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem
	I1025 20:44:12.677863    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.pem (1078 bytes)
	I1025 20:44:12.678028    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:44:12.678053    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem, removing ...
	I1025 20:44:12.678080    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem
	I1025 20:44:12.678142    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/cert.pem (1123 bytes)
	I1025 20:44:12.678261    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:44:12.678284    9122 exec_runner.go:144] found /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem, removing ...
	I1025 20:44:12.678289    9122 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem
	I1025 20:44:12.678345    9122 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/14956-2080/.minikube/key.pem (1679 bytes)
	I1025 20:44:12.678483    9122 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem org=jenkins.multinode-203818-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-203818-m02]
	I1025 20:44:12.759757    9122 provision.go:172] copyRemoteCerts
	I1025 20:44:12.759832    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 20:44:12.759878    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:12.827796    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:12.915334    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 20:44:12.915430    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 20:44:12.933018    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 20:44:12.933078    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1025 20:44:12.949721    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 20:44:12.949801    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 20:44:12.966236    9122 provision.go:86] duration metric: configureAuth took 357.409581ms
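configureAuth regenerated the Docker server certificate with the SANs listed above (node IP 192.168.58.3, localhost, the machine name) and copied it into /etc/docker on the node. A sketch for confirming the SANs on the host-side copy, using the path from this run:

    # show the Subject Alternative Name extension of the generated server cert
    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/14956-2080/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'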
	I1025 20:44:12.966251    9122 ubuntu.go:193] setting minikube options for container-runtime
	I1025 20:44:12.966418    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:12.966470    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.029690    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:13.029886    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:13.029896    9122 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 20:44:13.154210    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 20:44:13.154223    9122 ubuntu.go:71] root file system type: overlay
	I1025 20:44:13.154940    9122 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 20:44:13.155079    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.219797    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:13.219931    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:13.219978    9122 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 20:44:13.349991    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 20:44:13.350077    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.415362    9122 main.go:134] libmachine: Using SSH client type: native
	I1025 20:44:13.415521    9122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6b40] 0x13e9cc0 <nil>  [] 0s} 127.0.0.1 51373 <nil> <nil>}
	I1025 20:44:13.415535    9122 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
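The one-liner above is an idempotent unit install: diff exits zero when the rendered docker.service matches what is already on disk, so the mv/daemon-reload/restart branch only runs when the unit actually changed; on an unmodified node the docker restart is skipped entirely. Reduced to its shape (paths as in the log):

    # restart docker only when the rendered unit differs from the installed one
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl daemon-reload && sudo systemctl restart docker; }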
	I1025 20:44:13.542653    9122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 20:44:13.542668    9122 machine.go:91] provisioned docker machine in 1.347809776s
	I1025 20:44:13.542674    9122 start.go:300] post-start starting for "multinode-203818-m02" (driver="docker")
	I1025 20:44:13.542679    9122 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 20:44:13.542734    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 20:44:13.542793    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.606825    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:13.693211    9122 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 20:44:13.696366    9122 command_runner.go:130] > NAME="Ubuntu"
	I1025 20:44:13.696383    9122 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1025 20:44:13.696387    9122 command_runner.go:130] > ID=ubuntu
	I1025 20:44:13.696392    9122 command_runner.go:130] > ID_LIKE=debian
	I1025 20:44:13.696396    9122 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1025 20:44:13.696400    9122 command_runner.go:130] > VERSION_ID="20.04"
	I1025 20:44:13.696406    9122 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 20:44:13.696410    9122 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 20:44:13.696414    9122 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 20:44:13.696423    9122 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 20:44:13.696429    9122 command_runner.go:130] > VERSION_CODENAME=focal
	I1025 20:44:13.696432    9122 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1025 20:44:13.696477    9122 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 20:44:13.696487    9122 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 20:44:13.696500    9122 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 20:44:13.696504    9122 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 20:44:13.696519    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/addons for local assets ...
	I1025 20:44:13.696622    9122 filesync.go:126] Scanning /Users/jenkins/minikube-integration/14956-2080/.minikube/files for local assets ...
	I1025 20:44:13.696768    9122 filesync.go:149] local asset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> 29162.pem in /etc/ssl/certs
	I1025 20:44:13.696775    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /etc/ssl/certs/29162.pem
	I1025 20:44:13.696903    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 20:44:13.703981    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:44:13.720777    9122 start.go:303] post-start completed in 178.09464ms
	I1025 20:44:13.720858    9122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:44:13.720908    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:13.784893    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:13.869271    9122 command_runner.go:130] > 6%!(MISSING)
	I1025 20:44:13.869570    9122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:44:13.873397    9122 command_runner.go:130] > 92G
	I1025 20:44:13.873696    9122 fix.go:57] fixHost completed within 2.312617394s
	I1025 20:44:13.873705    9122 start.go:83] releasing machines lock for "multinode-203818-m02", held for 2.31264211s
	I1025 20:44:13.873768    9122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:44:13.957825    9122 out.go:177] * Found network options:
	I1025 20:44:13.979655    9122 out.go:177]   - NO_PROXY=192.168.58.2
	W1025 20:44:14.001593    9122 proxy.go:119] fail to check proxy env: Error ip not in block
	W1025 20:44:14.001645    9122 proxy.go:119] fail to check proxy env: Error ip not in block
	I1025 20:44:14.001845    9122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 20:44:14.001869    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 20:44:14.001967    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:14.001977    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:44:14.069310    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:14.069901    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51373 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:44:14.160131    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1025 20:44:14.173115    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:44:14.200970    9122 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 20:44:14.254199    9122 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 20:44:14.352590    9122 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 20:44:14.362985    9122 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 20:44:14.363525    9122 command_runner.go:130] > [Unit]
	I1025 20:44:14.363534    9122 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 20:44:14.363544    9122 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 20:44:14.363550    9122 command_runner.go:130] > BindsTo=containerd.service
	I1025 20:44:14.363555    9122 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 20:44:14.363559    9122 command_runner.go:130] > Wants=network-online.target
	I1025 20:44:14.363563    9122 command_runner.go:130] > Requires=docker.socket
	I1025 20:44:14.363568    9122 command_runner.go:130] > StartLimitBurst=3
	I1025 20:44:14.363571    9122 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 20:44:14.363574    9122 command_runner.go:130] > [Service]
	I1025 20:44:14.363578    9122 command_runner.go:130] > Type=notify
	I1025 20:44:14.363581    9122 command_runner.go:130] > Restart=on-failure
	I1025 20:44:14.363585    9122 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1025 20:44:14.363591    9122 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 20:44:14.363597    9122 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 20:44:14.363606    9122 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 20:44:14.363611    9122 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 20:44:14.363617    9122 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 20:44:14.363622    9122 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 20:44:14.363630    9122 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 20:44:14.363640    9122 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 20:44:14.363646    9122 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 20:44:14.363650    9122 command_runner.go:130] > ExecStart=
	I1025 20:44:14.363661    9122 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 20:44:14.363666    9122 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 20:44:14.363672    9122 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 20:44:14.363678    9122 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 20:44:14.363682    9122 command_runner.go:130] > LimitNOFILE=infinity
	I1025 20:44:14.363685    9122 command_runner.go:130] > LimitNPROC=infinity
	I1025 20:44:14.363689    9122 command_runner.go:130] > LimitCORE=infinity
	I1025 20:44:14.363694    9122 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 20:44:14.363699    9122 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 20:44:14.363702    9122 command_runner.go:130] > TasksMax=infinity
	I1025 20:44:14.363705    9122 command_runner.go:130] > TimeoutStartSec=0
	I1025 20:44:14.363711    9122 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 20:44:14.363714    9122 command_runner.go:130] > Delegate=yes
	I1025 20:44:14.363724    9122 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 20:44:14.363728    9122 command_runner.go:130] > KillMode=process
	I1025 20:44:14.363731    9122 command_runner.go:130] > [Install]
	I1025 20:44:14.363737    9122 command_runner.go:130] > WantedBy=multi-user.target
	I1025 20:44:14.363849    9122 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 20:44:14.363906    9122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 20:44:14.373079    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 20:44:14.384406    9122 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 20:44:14.384420    9122 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
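Writing /etc/crictl.yaml above points crictl at the cri-dockerd socket, so the bare `sudo crictl version` call further down hits the right runtime. The endpoint can also be passed per invocation; a sketch run inside the node:

    # query cri-dockerd directly, independent of /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version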
	I1025 20:44:14.385299    9122 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 20:44:14.449957    9122 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 20:44:14.515541    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:44:14.597736    9122 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 20:44:14.808775    9122 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 20:44:14.880558    9122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 20:44:14.950550    9122 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 20:44:14.959952    9122 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 20:44:14.960018    9122 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 20:44:14.964235    9122 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 20:44:14.964252    9122 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 20:44:14.964264    9122 command_runner.go:130] > Device: 100035h/1048629d	Inode: 130         Links: 1
	I1025 20:44:14.964273    9122 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 20:44:14.964284    9122 command_runner.go:130] > Access: 2022-10-26 03:44:14.911540563 +0000
	I1025 20:44:14.964291    9122 command_runner.go:130] > Modify: 2022-10-26 03:44:14.274540607 +0000
	I1025 20:44:14.964295    9122 command_runner.go:130] > Change: 2022-10-26 03:44:14.286540607 +0000
	I1025 20:44:14.964301    9122 command_runner.go:130] >  Birth: -
	I1025 20:44:14.964373    9122 start.go:472] Will wait 60s for crictl version
	I1025 20:44:14.964418    9122 ssh_runner.go:195] Run: sudo crictl version
	I1025 20:44:14.992780    9122 command_runner.go:130] > Version:  0.1.0
	I1025 20:44:14.992792    9122 command_runner.go:130] > RuntimeName:  docker
	I1025 20:44:14.992806    9122 command_runner.go:130] > RuntimeVersion:  20.10.18
	I1025 20:44:14.992815    9122 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1025 20:44:14.994850    9122 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 20:44:14.994921    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:44:15.019910    9122 command_runner.go:130] > 20.10.18
	I1025 20:44:15.022225    9122 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 20:44:15.046873    9122 command_runner.go:130] > 20.10.18
	I1025 20:44:15.074290    9122 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 20:44:15.115184    9122 out.go:177]   - env NO_PROXY=192.168.58.2
	I1025 20:44:15.136476    9122 cli_runner.go:164] Run: docker exec -t multinode-203818-m02 dig +short host.docker.internal
	I1025 20:44:15.255350    9122 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 20:44:15.255485    9122 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 20:44:15.259923    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
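The bash one-liner above is minikube's replace-or-add pattern for /etc/hosts: grep -v strips any existing host.minikube.internal entry, the fresh mapping is appended, the result is staged under /tmp, and sudo cp writes it back (a plain `sudo ... > /etc/hosts` would fail because the redirection happens before sudo runs). The same pattern with illustrative values:

    # replace-or-add a hosts entry without clobbering the rest of the file
    { grep -v $'\texample.internal$' /etc/hosts; echo $'192.0.2.1\texample.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts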
	I1025 20:44:15.269340    9122 certs.go:54] Setting up /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818 for IP: 192.168.58.3
	I1025 20:44:15.269449    9122 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key
	I1025 20:44:15.269494    9122 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key
	I1025 20:44:15.269523    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 20:44:15.269550    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 20:44:15.269565    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 20:44:15.269581    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 20:44:15.269670    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem (1338 bytes)
	W1025 20:44:15.269707    9122 certs.go:384] ignoring /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916_empty.pem, impossibly tiny 0 bytes
	I1025 20:44:15.269731    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 20:44:15.269764    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem (1078 bytes)
	I1025 20:44:15.269794    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem (1123 bytes)
	I1025 20:44:15.269820    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/certs/key.pem (1679 bytes)
	I1025 20:44:15.269881    9122 certs.go:388] found cert: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem (1708 bytes)
	I1025 20:44:15.269914    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem -> /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.269932    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.269950    9122 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem -> /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.270326    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 20:44:15.287336    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 20:44:15.303765    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 20:44:15.320192    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 20:44:15.336782    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/ssl/certs/29162.pem --> /usr/share/ca-certificates/29162.pem (1708 bytes)
	I1025 20:44:15.353371    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 20:44:15.370054    9122 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/2916.pem --> /usr/share/ca-certificates/2916.pem (1338 bytes)
	I1025 20:44:15.386711    9122 ssh_runner.go:195] Run: openssl version
	I1025 20:44:15.391381    9122 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1025 20:44:15.391745    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 20:44:15.399021    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.402726    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.402951    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 26 03:18 /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.402998    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 20:44:15.407823    9122 command_runner.go:130] > b5213941
	I1025 20:44:15.408133    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 20:44:15.415053    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2916.pem && ln -fs /usr/share/ca-certificates/2916.pem /etc/ssl/certs/2916.pem"
	I1025 20:44:15.423389    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.427055    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.427126    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 26 03:22 /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.427163    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2916.pem
	I1025 20:44:15.431956    9122 command_runner.go:130] > 51391683
	I1025 20:44:15.432270    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2916.pem /etc/ssl/certs/51391683.0"
	I1025 20:44:15.439614    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29162.pem && ln -fs /usr/share/ca-certificates/29162.pem /etc/ssl/certs/29162.pem"
	I1025 20:44:15.447130    9122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.450687    9122 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.450749    9122 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 26 03:22 /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.450794    9122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29162.pem
	I1025 20:44:15.455626    9122 command_runner.go:130] > 3ec20f2e
	I1025 20:44:15.455978    9122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29162.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 20:44:15.462987    9122 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 20:44:15.527725    9122 command_runner.go:130] > systemd
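The probe above reports Docker's cgroup driver as systemd, which is why the generated kubeadm config below sets cgroupDriver: systemd; kubelet and the container runtime must agree on the driver or pods fail to start. Checked by hand inside the node:

    # these two should report the same driver (systemd on this image)
    docker info --format '{{.CgroupDriver}}'
    grep cgroupDriver /var/lib/kubelet/config.yaml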
	I1025 20:44:15.529863    9122 cni.go:95] Creating CNI manager for ""
	I1025 20:44:15.529874    9122 cni.go:156] 2 nodes found, recommending kindnet
	I1025 20:44:15.529887    9122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 20:44:15.529902    9122 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-203818 NodeName:multinode-203818-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 20:44:15.529985    9122 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-203818-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 20:44:15.530034    9122 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-203818-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
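The drop-in above overrides kubelet's ExecStart to the version-pinned binary with the cri-dockerd socket and node IP baked in; it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below. Once installed, the merged unit can be inspected in the node with:

    # show the base kubelet unit plus the 10-kubeadm.conf drop-in
    sudo systemctl cat kubelet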
	I1025 20:44:15.530092    9122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 20:44:15.536939    9122 command_runner.go:130] > kubeadm
	I1025 20:44:15.536947    9122 command_runner.go:130] > kubectl
	I1025 20:44:15.536951    9122 command_runner.go:130] > kubelet
	I1025 20:44:15.537650    9122 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 20:44:15.537697    9122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1025 20:44:15.544541    9122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1025 20:44:15.556554    9122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 20:44:15.568700    9122 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 20:44:15.572493    9122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 20:44:15.581924    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:15.582093    9122 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:44:15.582090    9122 start.go:286] JoinCluster: &{Name:multinode-203818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-203818 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false por
tainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:44:15.582174    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1025 20:44:15.582218    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:15.646255    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:15.774631    9122 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 
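--ttl=0 on the token-create command above makes the bootstrap token non-expiring, so the printed join command remains valid for the rejoin attempted below. Existing tokens can be listed on the control-plane node:

    # the 587gq9.* token from this run should show up here
    sudo kubeadm token list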
	I1025 20:44:15.778620    9122 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:15.778648    9122 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:44:15.778864    9122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-203818-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1025 20:44:15.778906    9122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:44:15.842793    9122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51341 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:44:15.985149    9122 command_runner.go:130] > node/multinode-203818-m02 cordoned
	I1025 20:44:19.003270    9122 command_runner.go:130] > pod "busybox-65db55d5d6-jf8jp" has DeletionTimestamp older than 1 seconds, skipping
	I1025 20:44:19.003290    9122 command_runner.go:130] > node/multinode-203818-m02 drained
	I1025 20:44:19.006692    9122 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1025 20:44:19.006713    9122 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-q9qv5, kube-system/kube-proxy-j799s
	I1025 20:44:19.006734    9122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-203818-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.227854199s)
	I1025 20:44:19.006743    9122 node.go:109] successfully drained node "m02"
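The drain above passes --ignore-daemonsets (kindnet-q9qv5 and kube-proxy-j799s are DaemonSet-managed, hence the warning) plus both --delete-local-data and its replacement --delete-emptydir-data, which is what triggers the deprecation notice. Trimmed to the non-deprecated flags, the standalone equivalent is roughly:

    # cordon, then force-evict workloads so the node can be deleted and rejoined
    kubectl drain multinode-203818-m02 --force --grace-period=1 \
      --ignore-daemonsets --delete-emptydir-data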
	I1025 20:44:19.007037    9122 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:44:19.007225    9122 kapi.go:59] client config for multinode-203818: &rest.Config{Host:"https://127.0.0.1:51345", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.crt", KeyFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/multinode-203818/client.key", CAFile:"/Users/jenkins/minikube-integration/14956-2080/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2341800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 20:44:19.007480    9122 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1025 20:44:19.007506    9122 round_trippers.go:463] DELETE https://127.0.0.1:51345/api/v1/nodes/multinode-203818-m02
	I1025 20:44:19.007510    9122 round_trippers.go:469] Request Headers:
	I1025 20:44:19.007517    9122 round_trippers.go:473]     Accept: application/json, */*
	I1025 20:44:19.007522    9122 round_trippers.go:473]     Content-Type: application/json
	I1025 20:44:19.007527    9122 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 20:44:19.010665    9122 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 20:44:19.010677    9122 round_trippers.go:577] Response Headers:
	I1025 20:44:19.010683    9122 round_trippers.go:580]     Audit-Id: dc7ed9be-6012-45e8-a1d4-601bc5c4655d
	I1025 20:44:19.010688    9122 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 20:44:19.010696    9122 round_trippers.go:580]     Content-Type: application/json
	I1025 20:44:19.010701    9122 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 90d362d6-cb0c-4d3b-9347-abe25d5e10bc
	I1025 20:44:19.010706    9122 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e82f1746-78df-4f2c-91e5-d81fec4dc2f7
	I1025 20:44:19.010712    9122 round_trippers.go:580]     Content-Length: 171
	I1025 20:44:19.010717    9122 round_trippers.go:580]     Date: Wed, 26 Oct 2022 03:44:19 GMT
	I1025 20:44:19.010729    9122 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-203818-m02","kind":"nodes","uid":"7c7037c9-edec-40ae-94ec-6fc8e2997faa"}}
	I1025 20:44:19.010752    9122 node.go:125] successfully deleted node "m02"
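	(The DELETE above removes the Node object directly through the API. The kubectl equivalent is simply:

	    # one-line equivalent of the API call logged above
	    kubectl delete node multinode-203818-m02
	)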
	I1025 20:44:19.010758    9122 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:19.010771    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:19.010781    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:44:19.046970    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:44:19.156652    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:44:19.156670    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 20:44:19.174304    9122 command_runner.go:130] ! W1026 03:44:19.053625    1094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:44:19.174317    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:44:19.174328    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:44:19.174334    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:44:19.174339    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:44:19.174347    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:44:19.174357    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:44:19.174364    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1025 20:44:19.174393    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:19.053625    1094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
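	(The join fails in the kubelet-start phase even though the Node object was deleted moments earlier. The likely sequence, not proven by this log alone: the kubelet on m02 was never stopped, still holds its old /etc/kubernetes/kubelet.conf (see the FileAvailable warning above), and re-registers the node as Ready before kubeadm join runs, so every attempt hits "already exists". A manual check and workaround sketch, both assumptions:

	    # if the kubelet re-registers the node, it reappears as Ready right after deletion
	    kubectl get node multinode-203818-m02
	    # stopping the kubelet on m02 before deleting and rejoining should break the loop
	    sudo systemctl stop kubelet
	)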
	I1025 20:44:19.174407    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:44:19.174416    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:44:19.210642    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:44:19.210658    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:19.210681    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
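	(kubeadm reset aborts because both containerd.sock and cri-dockerd.sock exist on the host and, unlike the join command, the reset is invoked without --cri-socket. Pinning the endpoint explicitly should let the reset proceed; a sketch:

	    # pin the CRI endpoint so reset does not have to choose between the two sockets
	    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
	)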
	I1025 20:44:19.210708    9122 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:19.053625    1094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
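	(retry.go now re-enters the same join/reset cycle with growing, jittered delays: 11.0s, 21.6s, 26.2s, 31.6s, 46.8s in the attempts below. Since nothing in the loop stops the kubelet or fixes the reset's CRI-socket problem, each iteration fails identically. The shape of the loop, reduced to a shell sketch (delays are approximations of the log's backoff, not minikube's actual policy):

	    join_node() {
	      # stands in for the full kubeadm join invocation shown in the log
	      sudo kubeadm join control-plane.minikube.internal:8443 \
	        --token 587gq9.polv07hxk2mlly36 \
	        --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 \
	        --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock \
	        --node-name=multinode-203818-m02
	    }
	    for delay in 11 22 26 32 47; do        # approximate jittered backoff from the log
	      join_node && break
	      sudo kubeadm reset --force || true   # reset failure is tolerated, as in the log
	      sleep "$delay"
	    done
	)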
	I1025 20:44:30.257632    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:30.257757    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:44:30.292636    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:44:30.389620    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:44:30.389636    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 20:44:30.408437    9122 command_runner.go:130] ! W1026 03:44:30.307581    1763 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:44:30.408457    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:44:30.408465    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:44:30.408471    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:44:30.408477    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:44:30.408483    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:44:30.408492    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:44:30.408500    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1025 20:44:30.408537    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:30.307581    1763 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:30.408544    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:44:30.408552    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:44:30.443439    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:44:30.443453    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:30.443468    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:30.443478    9122 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:30.307581    1763 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.051305    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:44:52.051347    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:44:52.086855    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:44:52.182414    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:44:52.182431    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 20:44:52.200778    9122 command_runner.go:130] ! W1026 03:44:52.095980    2007 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:44:52.200792    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:44:52.200803    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:44:52.200808    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:44:52.200812    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:44:52.200818    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:44:52.200829    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:44:52.200836    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1025 20:44:52.200866    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:52.095980    2007 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.200877    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:44:52.200885    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:44:52.235377    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:44:52.235390    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.235413    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:44:52.235424    9122 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:44:52.095980    2007 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.438414    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:45:18.438478    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:45:18.474545    9122 command_runner.go:130] ! W1026 03:45:18.481202    2273 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:45:18.474818    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:45:18.498826    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:45:18.505886    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:45:18.561737    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:45:18.561751    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:45:18.586966    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:45:18.586979    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.590178    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:45:18.590190    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:45:18.590203    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1025 20:45:18.590234    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:18.481202    2273 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.590244    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:45:18.590255    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:45:18.626579    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:45:18.626595    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.626609    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:18.626619    9122 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:18.481202    2273 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.275205    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:45:50.275286    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:45:50.310625    9122 command_runner.go:130] ! W1026 03:45:50.318493    2605 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:45:50.310643    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:45:50.332848    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:45:50.337566    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:45:50.392780    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:45:50.392793    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:45:50.418654    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:45:50.418666    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.421800    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:45:50.421813    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:45:50.421820    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1025 20:45:50.421847    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:50.318493    2605 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.421857    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:45:50.421874    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:45:50.460197    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:45:50.460213    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.460233    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:45:50.460244    9122 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:45:50.318493    2605 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.271767    9122 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 20:46:37.271843    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02"
	I1025 20:46:37.307265    9122 command_runner.go:130] ! W1026 03:46:37.327327    3042 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 20:46:37.307283    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1025 20:46:37.330921    9122 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 20:46:37.337123    9122 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1025 20:46:37.396912    9122 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1025 20:46:37.396930    9122 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1025 20:46:37.423054    9122 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1025 20:46:37.423067    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.426213    9122 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 20:46:37.426226    9122 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 20:46:37.426236    9122 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1025 20:46:37.426269    9122 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:46:37.327327    3042 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.426277    9122 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1025 20:46:37.426285    9122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1025 20:46:37.464275    9122 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1025 20:46:37.464291    9122 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.464306    9122 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1025 20:46:37.464323    9122 start.go:288] JoinCluster complete in 2m21.882148659s
	I1025 20:46:37.486388    9122 out.go:177] 
	W1025 20:46:37.507422    9122 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 587gq9.polv07hxk2mlly36 --discovery-token-ca-cert-hash sha256:4c78d04b074c9d8ff5b9d93ea7526f2ac58bd2deb6968dc75d2e26408b7b8c71 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-203818-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1026 03:46:37.327327    3042 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-203818-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 20:46:37.507454    9122 out.go:239] * 
	W1025 20:46:37.508746    9122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
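	(Net result: five join attempts over 2m21.9s, all failing with the same "Node ... already exists" error, and five kubeadm resets all failing on the ambiguous CRI socket, so the run exits with GUEST_START once JoinCluster gives up. The sections that follow are the standard post-mortem dump, Docker daemon logs, container status, coredns, and describe nodes, taken from the surviving control-plane node.)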
	I1025 20:46:37.594287    9122 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-10-26 03:43:38 UTC, end at Wed 2022-10-26 03:46:39 UTC. --
	Oct 26 03:43:40 multinode-203818 dockerd[131]: time="2022-10-26T03:43:40.725193202Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 26 03:43:40 multinode-203818 dockerd[131]: time="2022-10-26T03:43:40.725703907Z" level=info msg="Daemon shutdown complete"
	Oct 26 03:43:40 multinode-203818 systemd[1]: docker.service: Succeeded.
	Oct 26 03:43:40 multinode-203818 systemd[1]: Stopped Docker Application Container Engine.
	Oct 26 03:43:40 multinode-203818 systemd[1]: docker.service: Consumed 1.089s CPU time.
	Oct 26 03:43:40 multinode-203818 systemd[1]: Starting Docker Application Container Engine...
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.772280350Z" level=info msg="Starting up"
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.773803424Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.773836037Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.773878747Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.773886993Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.774965921Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.775002285Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.775014800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.775020875Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.778154274Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.783450254Z" level=info msg="Loading containers: start."
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.889010184Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.923319870Z" level=info msg="Loading containers: done."
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.934900228Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.934976608Z" level=info msg="Daemon has completed initialization"
	Oct 26 03:43:40 multinode-203818 systemd[1]: Started Docker Application Container Engine.
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.960922188Z" level=info msg="API listen on [::]:2376"
	Oct 26 03:43:40 multinode-203818 dockerd[662]: time="2022-10-26T03:43:40.963609102Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 03:44:22 multinode-203818 dockerd[662]: time="2022-10-26T03:44:22.894195367Z" level=info msg="ignoring event" container=bfac4a3b6563f338e1db6a9b1e67a887f2a70700fe8e2b1970ae9c83d087ee23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	3bcdfd3001ba0       6e38f40d628db       2 minutes ago       Running             storage-provisioner       3                   3f36fff4de245
	803751fcba647       5185b96f0becf       2 minutes ago       Running             coredns                   2                   91fd8ea7c6b52
	c1946d1538a88       beaaf00edd38a       2 minutes ago       Running             kube-proxy                2                   76be555bdd3b2
	f14a61275e07e       d6e3e26021b60       2 minutes ago       Running             kindnet-cni               2                   58c17caf960eb
	bfac4a3b6563f       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       2                   3f36fff4de245
	2f502970fd16f       8c811b4aec35f       2 minutes ago       Running             busybox                   2                   0c4cd6f644dd5
	6a2890ebe9bcc       0346dbd74bcb9       2 minutes ago       Running             kube-apiserver            2                   f464912448866
	3b0babd0cff73       a8a176a5d5d69       2 minutes ago       Running             etcd                      2                   46c807d4343a0
	11f0233e02765       6039992312758       2 minutes ago       Running             kube-controller-manager   2                   affbf2135957f
	50315ab330f3a       6d23ec0e8b87e       2 minutes ago       Running             kube-scheduler            2                   601d8d76d954c
	a76713468a8e1       5185b96f0becf       4 minutes ago       Exited              coredns                   1                   c5b570db3f972
	bf7b5ebb864d9       d6e3e26021b60       4 minutes ago       Exited              kindnet-cni               1                   d412a631e4ae3
	0f2007f45d4f2       8c811b4aec35f       4 minutes ago       Exited              busybox                   1                   7a083af363309
	901030c096733       beaaf00edd38a       4 minutes ago       Exited              kube-proxy                1                   fa258b141e90b
	3494771f98f11       0346dbd74bcb9       4 minutes ago       Exited              kube-apiserver            1                   aa702be3519ca
	acf347f03ed91       a8a176a5d5d69       4 minutes ago       Exited              etcd                      1                   6e35a55843e18
	c0ffc4ed686c8       6d23ec0e8b87e       4 minutes ago       Exited              kube-scheduler            1                   6578e02f60a43
	29a55c918cc05       6039992312758       4 minutes ago       Exited              kube-controller-manager   1                   34b369462e062
	
	* 
	* ==> coredns [803751fcba64] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [a76713468a8e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-203818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-203818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a202e21b7dfdf03a7523ceebf3573bc3065a5a1a
	                    minikube.k8s.io/name=multinode-203818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_10_25T20_38_46_0700
	                    minikube.k8s.io/version=v1.27.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Oct 2022 03:38:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-203818
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Oct 2022 03:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Oct 2022 03:43:50 +0000   Wed, 26 Oct 2022 03:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Oct 2022 03:43:50 +0000   Wed, 26 Oct 2022 03:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Oct 2022 03:43:50 +0000   Wed, 26 Oct 2022 03:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Oct 2022 03:43:50 +0000   Wed, 26 Oct 2022 03:39:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-203818
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 18f31d64397c45b9b9d6ac880da4e8a3
	  System UUID:                a23b3391-71c6-4c44-88a7-40a93514124f
	  Boot ID:                    b3896f5a-5b30-406c-b85d-7cc2a48c4237
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.18
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-h6pzg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 coredns-565d847f94-tvhv6                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     7m40s
	  kube-system                 etcd-multinode-203818                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         7m53s
	  kube-system                 kindnet-8xvrw                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m41s
	  kube-system                 kube-apiserver-multinode-203818             250m (4%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-controller-manager-multinode-203818    200m (3%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-48p2l                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-scheduler-multinode-203818             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m40s                  kube-proxy       
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m4s (x5 over 8m4s)    kubelet          Node multinode-203818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m4s (x5 over 8m4s)    kubelet          Node multinode-203818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m4s (x4 over 8m4s)    kubelet          Node multinode-203818 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m53s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m53s                  kubelet          Node multinode-203818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m53s                  kubelet          Node multinode-203818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s                  kubelet          Node multinode-203818 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m41s                  node-controller  Node multinode-203818 event: Registered Node multinode-203818 in Controller
	  Normal  NodeReady                7m32s                  kubelet          Node multinode-203818 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    4m48s (x8 over 4m48s)  kubelet          Node multinode-203818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m48s (x8 over 4m48s)  kubelet          Node multinode-203818 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m48s (x7 over 4m48s)  kubelet          Node multinode-203818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m32s                  node-controller  Node multinode-203818 event: Registered Node multinode-203818 in Controller
	  Normal  Starting                 2m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m53s (x8 over 2m53s)  kubelet          Node multinode-203818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x8 over 2m53s)  kubelet          Node multinode-203818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x7 over 2m53s)  kubelet          Node multinode-203818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m36s                  node-controller  Node multinode-203818 event: Registered Node multinode-203818 in Controller
	
	
	Name:               multinode-203818-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-203818-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Oct 2022 03:44:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-203818-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Oct 2022 03:46:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Oct 2022 03:44:19 +0000   Wed, 26 Oct 2022 03:44:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Oct 2022 03:44:19 +0000   Wed, 26 Oct 2022 03:44:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Oct 2022 03:44:19 +0000   Wed, 26 Oct 2022 03:44:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Oct 2022 03:44:19 +0000   Wed, 26 Oct 2022 03:44:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-203818-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086512Ki
	  pods:               110
	System Info:
	  Machine ID:                 18f31d64397c45b9b9d6ac880da4e8a3
	  System UUID:                b3653781-3adc-43c6-b92b-d74b2a052528
	  Boot ID:                    b3896f5a-5b30-406c-b85d-7cc2a48c4237
	  Kernel Version:             5.10.124-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.18
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-ttlxp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kindnet-q9qv5               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m18s
	  kube-system                 kube-proxy-j799s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m14s                  kube-proxy  
	  Normal  Starting                 2m18s                  kube-proxy  
	  Normal  Starting                 4m16s                  kube-proxy  
	  Normal  Starting                 7m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m19s (x2 over 7m19s)  kubelet     Node multinode-203818-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x2 over 7m19s)  kubelet     Node multinode-203818-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x2 over 7m19s)  kubelet     Node multinode-203818-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-203818-m02 status is now: NodeReady
	  Normal  NodeHasSufficientPID     4m19s (x2 over 4m19s)  kubelet     Node multinode-203818-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m19s (x2 over 4m19s)  kubelet     Node multinode-203818-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m19s (x2 over 4m19s)  kubelet     Node multinode-203818-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m19s                  kubelet     Starting kubelet.
	  Normal  NodeReady                4m8s                   kubelet     Node multinode-203818-m02 status is now: NodeReady
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m20s (x7 over 2m27s)  kubelet     Node multinode-203818-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x7 over 2m27s)  kubelet     Node multinode-203818-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m27s)  kubelet     Node multinode-203818-m02 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.001450] FS-Cache: O-key=[8] '8641990300000000'
	[  +0.001119] FS-Cache: N-cookie c=00000000fbd99b07 [p=00000000830b9f83 fl=2 nc=0 na=1]
	[  +0.001795] FS-Cache: N-cookie d=00000000912eb0a9 n=00000000785613f5
	[  +0.001458] FS-Cache: N-key=[8] '8641990300000000'
	[  +0.002064] FS-Cache: Duplicate cookie detected
	[  +0.001005] FS-Cache: O-cookie c=00000000b901a70b [p=00000000830b9f83 fl=226 nc=0 na=1]
	[  +0.001776] FS-Cache: O-cookie d=00000000912eb0a9 n=000000009bbc4021
	[  +0.001490] FS-Cache: O-key=[8] '8641990300000000'
	[  +0.001113] FS-Cache: N-cookie c=00000000fbd99b07 [p=00000000830b9f83 fl=2 nc=0 na=1]
	[  +0.001738] FS-Cache: N-cookie d=00000000912eb0a9 n=0000000048f230f2
	[  +0.001423] FS-Cache: N-key=[8] '8641990300000000'
	[  +3.468856] FS-Cache: Duplicate cookie detected
	[  +0.001027] FS-Cache: O-cookie c=000000002addf063 [p=00000000830b9f83 fl=226 nc=0 na=1]
	[  +0.001760] FS-Cache: O-cookie d=00000000912eb0a9 n=000000007e91936d
	[  +0.001414] FS-Cache: O-key=[8] '8541990300000000'
	[  +0.001115] FS-Cache: N-cookie c=00000000a0bf947d [p=00000000830b9f83 fl=2 nc=0 na=1]
	[  +0.001729] FS-Cache: N-cookie d=00000000912eb0a9 n=00000000403718fe
	[  +0.001413] FS-Cache: N-key=[8] '8541990300000000'
	[  +0.420305] FS-Cache: Duplicate cookie detected
	[  +0.001040] FS-Cache: O-cookie c=000000002112ffc9 [p=00000000830b9f83 fl=226 nc=0 na=1]
	[  +0.001748] FS-Cache: O-cookie d=00000000912eb0a9 n=00000000c3587e1b
	[  +0.001449] FS-Cache: O-key=[8] '8f41990300000000'
	[  +0.001094] FS-Cache: N-cookie c=00000000c518cc64 [p=00000000830b9f83 fl=2 nc=0 na=1]
	[  +0.001759] FS-Cache: N-cookie d=00000000912eb0a9 n=0000000048f230f2
	[  +0.001431] FS-Cache: N-key=[8] '8f41990300000000'
	
	* 
	* ==> etcd [3b0babd0cff7] <==
	* {"level":"info","ts":"2022-10-26T03:43:47.982Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-10-26T03:43:47.982Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-10-26T03:43:47.982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-10-26T03:43:47.982Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-10-26T03:43:47.983Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-26T03:43:47.983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-26T03:43:47.984Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-10-26T03:43:47.985Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-10-26T03:43:47.985Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-10-26T03:43:47.985Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-10-26T03:43:47.985Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 3"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 4"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 4"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-203818 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-26T03:43:48.973Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-26T03:43:48.974Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-10-26T03:43:48.974Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-10-26T03:43:48.975Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-10-26T03:43:48.976Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> etcd [acf347f03ed9] <==
	* {"level":"info","ts":"2022-10-26T03:41:52.361Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-10-26T03:41:52.361Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-10-26T03:41:52.361Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-10-26T03:41:53.994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-10-26T03:41:53.994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-10-26T03:41:53.994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-10-26T03:41:53.994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-10-26T03:41:53.994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-10-26T03:41:53.995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-10-26T03:41:53.995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-10-26T03:41:53.997Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-203818 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-10-26T03:41:53.997Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-26T03:41:53.998Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-26T03:41:53.998Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-10-26T03:41:53.998Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-10-26T03:41:53.999Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-10-26T03:41:53.999Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-10-26T03:43:12.144Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-10-26T03:43:12.144Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-203818","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/10/26 03:43:12 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/10/26 03:43:12 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-10-26T03:43:12.152Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-10-26T03:43:12.157Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-10-26T03:43:12.158Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-10-26T03:43:12.158Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-203818","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  03:46:40 up 45 min,  0 users,  load average: 0.57, 0.54, 0.54
	Linux multinode-203818 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [3494771f98f1] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 03:43:12.147446       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 03:43:12.147498       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 03:43:12.147980       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [6a2890ebe9bc] <==
	* I1026 03:43:50.546791       1 controller.go:85] Starting OpenAPI controller
	I1026 03:43:50.546895       1 controller.go:85] Starting OpenAPI V3 controller
	I1026 03:43:50.546928       1 naming_controller.go:291] Starting NamingConditionController
	I1026 03:43:50.546998       1 establishing_controller.go:76] Starting EstablishingController
	I1026 03:43:50.547053       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1026 03:43:50.547090       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1026 03:43:50.547103       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1026 03:43:50.547151       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1026 03:43:50.547157       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1026 03:43:50.577576       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1026 03:43:50.630684       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1026 03:43:50.630778       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1026 03:43:50.645130       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 03:43:50.645129       1 cache.go:39] Caches are synced for autoregister controller
	I1026 03:43:50.645180       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 03:43:50.647426       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1026 03:43:50.657169       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 03:43:51.367573       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1026 03:43:51.543652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 03:43:53.016430       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1026 03:43:53.271663       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1026 03:43:53.278884       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1026 03:43:53.319365       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 03:43:53.369672       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 03:44:52.728403       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [11f0233e0276] <==
	* I1026 03:44:03.098335       1 shared_informer.go:262] Caches are synced for daemon sets
	I1026 03:44:03.099919       1 shared_informer.go:262] Caches are synced for attach detach
	I1026 03:44:03.105476       1 shared_informer.go:262] Caches are synced for persistent volume
	I1026 03:44:03.110234       1 shared_informer.go:262] Caches are synced for taint
	I1026 03:44:03.110309       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W1026 03:44:03.110371       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-203818. Assuming now as a timestamp.
	W1026 03:44:03.110486       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-203818-m02. Assuming now as a timestamp.
	I1026 03:44:03.110506       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1026 03:44:03.110637       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1026 03:44:03.110666       1 taint_manager.go:209] "Sending events to api server"
	I1026 03:44:03.110820       1 event.go:294] "Event occurred" object="multinode-203818" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-203818 event: Registered Node multinode-203818 in Controller"
	I1026 03:44:03.110877       1 event.go:294] "Event occurred" object="multinode-203818-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-203818-m02 event: Registered Node multinode-203818-m02 in Controller"
	I1026 03:44:03.112795       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1026 03:44:03.118524       1 shared_informer.go:262] Caches are synced for GC
	I1026 03:44:03.436835       1 shared_informer.go:262] Caches are synced for garbage collector
	I1026 03:44:03.502439       1 shared_informer.go:262] Caches are synced for garbage collector
	I1026 03:44:03.502499       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1026 03:44:16.013615       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-ttlxp"
	W1026 03:44:19.101319       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m02 node
	W1026 03:44:19.101614       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-203818-m02" does not exist
	I1026 03:44:19.104943       1 range_allocator.go:367] Set node multinode-203818-m02 PodCIDR to [10.244.1.0/24]
	I1026 03:44:43.084807       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-l9tx2"
	I1026 03:44:43.088945       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-l9tx2"
	I1026 03:44:43.088975       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-9j45q"
	I1026 03:44:43.092563       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-9j45q"
	
	* 
	* ==> kube-controller-manager [29a55c918cc0] <==
	* I1026 03:42:07.930625       1 shared_informer.go:262] Caches are synced for namespace
	I1026 03:42:07.970027       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1026 03:42:07.991922       1 shared_informer.go:262] Caches are synced for resource quota
	I1026 03:42:08.011242       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1026 03:42:08.020194       1 shared_informer.go:262] Caches are synced for crt configmap
	I1026 03:42:08.048009       1 shared_informer.go:262] Caches are synced for resource quota
	I1026 03:42:08.364128       1 shared_informer.go:262] Caches are synced for garbage collector
	I1026 03:42:08.434515       1 shared_informer.go:262] Caches are synced for garbage collector
	I1026 03:42:08.434556       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1026 03:42:17.182095       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-dmdtw"
	W1026 03:42:20.185332       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m03 node
	W1026 03:42:20.940422       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m03 node
	W1026 03:42:20.940534       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-203818-m02" does not exist
	I1026 03:42:20.944718       1 range_allocator.go:367] Set node multinode-203818-m02 PodCIDR to [10.244.1.0/24]
	I1026 03:42:24.140859       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-x5dqw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-x5dqw"
	W1026 03:42:31.162231       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m02 node
	I1026 03:42:38.498775       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-jf8jp"
	W1026 03:42:41.488411       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m02 node
	W1026 03:42:42.230327       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m02 node
	W1026 03:42:42.230383       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-203818-m03" does not exist
	I1026 03:42:42.237452       1 range_allocator.go:367] Set node multinode-203818-m03 PodCIDR to [10.244.2.0/24]
	I1026 03:42:44.428826       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-dmdtw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-dmdtw"
	W1026 03:42:52.274914       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m02 node
	W1026 03:42:55.370267       1 topologycache.go:199] Can't get CPU or zone information for multinode-203818-m02 node
	I1026 03:42:57.796576       1 event.go:294] "Event occurred" object="multinode-203818-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-203818-m03 event: Removing Node multinode-203818-m03 from Controller"
	
	* 
	* ==> kube-proxy [901030c09673] <==
	* I1026 03:41:57.162550       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1026 03:41:57.162665       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1026 03:41:57.163286       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1026 03:41:57.188600       1 server_others.go:206] "Using iptables Proxier"
	I1026 03:41:57.188625       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1026 03:41:57.188631       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1026 03:41:57.188639       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1026 03:41:57.188661       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1026 03:41:57.188965       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1026 03:41:57.189180       1 server.go:661] "Version info" version="v1.25.3"
	I1026 03:41:57.189250       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 03:41:57.190284       1 config.go:226] "Starting endpoint slice config controller"
	I1026 03:41:57.190313       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1026 03:41:57.190776       1 config.go:317] "Starting service config controller"
	I1026 03:41:57.191920       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1026 03:41:57.190963       1 config.go:444] "Starting node config controller"
	I1026 03:41:57.192025       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1026 03:41:57.290608       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1026 03:41:57.292902       1 shared_informer.go:262] Caches are synced for service config
	I1026 03:41:57.293184       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c1946d1538a8] <==
	* I1026 03:43:53.227889       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1026 03:43:53.228004       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1026 03:43:53.228045       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1026 03:43:53.288312       1 server_others.go:206] "Using iptables Proxier"
	I1026 03:43:53.288354       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1026 03:43:53.288361       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1026 03:43:53.288371       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1026 03:43:53.288387       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1026 03:43:53.288560       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1026 03:43:53.288712       1 server.go:661] "Version info" version="v1.25.3"
	I1026 03:43:53.288740       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 03:43:53.289251       1 config.go:226] "Starting endpoint slice config controller"
	I1026 03:43:53.289281       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1026 03:43:53.289258       1 config.go:317] "Starting service config controller"
	I1026 03:43:53.289348       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1026 03:43:53.289405       1 config.go:444] "Starting node config controller"
	I1026 03:43:53.289409       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1026 03:43:53.389385       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1026 03:43:53.389444       1 shared_informer.go:262] Caches are synced for service config
	I1026 03:43:53.389507       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [50315ab330f3] <==
	* I1026 03:43:47.311745       1 serving.go:348] Generated self-signed cert in-memory
	W1026 03:43:50.547360       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 03:43:50.547487       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 03:43:50.547518       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 03:43:50.547525       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 03:43:50.572651       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1026 03:43:50.572684       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 03:43:50.574939       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 03:43:50.574959       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 03:43:50.574998       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 03:43:50.575035       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 03:43:50.675627       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c0ffc4ed686c] <==
	* I1026 03:41:52.804463       1 serving.go:348] Generated self-signed cert in-memory
	W1026 03:41:55.570071       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 03:41:55.570624       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 03:41:55.570639       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 03:41:55.570644       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 03:41:55.578169       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1026 03:41:55.578201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 03:41:55.579037       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 03:41:55.579056       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 03:41:55.579062       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 03:41:55.579069       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 03:41:55.679820       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 03:43:12.149408       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1026 03:43:12.149503       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I1026 03:43:12.149552       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1026 03:43:12.151523       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-10-26 03:43:38 UTC, end at Wed 2022-10-26 03:46:41 UTC. --
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.114236    1225 topology_manager.go:205] "Topology Admit Handler"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.114262    1225 topology_manager.go:205] "Topology Admit Handler"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.115042    1225 topology_manager.go:205] "Topology Admit Handler"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.191222    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf96a572-bbca-4af2-bd3e-7d377772cef4-lib-modules\") pod \"kube-proxy-48p2l\" (UID: \"cf96a572-bbca-4af2-bd3e-7d377772cef4\") " pod="kube-system/kube-proxy-48p2l"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.191723    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07-xtables-lock\") pod \"kindnet-8xvrw\" (UID: \"a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07\") " pod="kube-system/kindnet-8xvrw"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192000    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf96a572-bbca-4af2-bd3e-7d377772cef4-xtables-lock\") pod \"kube-proxy-48p2l\" (UID: \"cf96a572-bbca-4af2-bd3e-7d377772cef4\") " pod="kube-system/kube-proxy-48p2l"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192171    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfhf\" (UniqueName: \"kubernetes.io/projected/474fe8dd-8e45-48d2-bb4b-dc85075bac03-kube-api-access-7vfhf\") pod \"busybox-65db55d5d6-h6pzg\" (UID: \"474fe8dd-8e45-48d2-bb4b-dc85075bac03\") " pod="default/busybox-65db55d5d6-h6pzg"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192294    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07-lib-modules\") pod \"kindnet-8xvrw\" (UID: \"a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07\") " pod="kube-system/kindnet-8xvrw"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192453    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c89eabb7-66d0-469a-8966-ceeb6f9b215e-config-volume\") pod \"coredns-565d847f94-tvhv6\" (UID: \"c89eabb7-66d0-469a-8966-ceeb6f9b215e\") " pod="kube-system/coredns-565d847f94-tvhv6"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192563    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725lt\" (UniqueName: \"kubernetes.io/projected/a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07-kube-api-access-725lt\") pod \"kindnet-8xvrw\" (UID: \"a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07\") " pod="kube-system/kindnet-8xvrw"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192680    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v844w\" (UniqueName: \"kubernetes.io/projected/93c13130-1e73-4433-b82f-b565797df5c6-kube-api-access-v844w\") pod \"storage-provisioner\" (UID: \"93c13130-1e73-4433-b82f-b565797df5c6\") " pod="kube-system/storage-provisioner"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192792    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqzj\" (UniqueName: \"kubernetes.io/projected/cf96a572-bbca-4af2-bd3e-7d377772cef4-kube-api-access-tbqzj\") pod \"kube-proxy-48p2l\" (UID: \"cf96a572-bbca-4af2-bd3e-7d377772cef4\") " pod="kube-system/kube-proxy-48p2l"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.192938    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mq5\" (UniqueName: \"kubernetes.io/projected/c89eabb7-66d0-469a-8966-ceeb6f9b215e-kube-api-access-v8mq5\") pod \"coredns-565d847f94-tvhv6\" (UID: \"c89eabb7-66d0-469a-8966-ceeb6f9b215e\") " pod="kube-system/coredns-565d847f94-tvhv6"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.193016    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf96a572-bbca-4af2-bd3e-7d377772cef4-kube-proxy\") pod \"kube-proxy-48p2l\" (UID: \"cf96a572-bbca-4af2-bd3e-7d377772cef4\") " pod="kube-system/kube-proxy-48p2l"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.193133    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07-cni-cfg\") pod \"kindnet-8xvrw\" (UID: \"a5ce36ab-ab4d-49e8-a9bc-5d9b42c70c07\") " pod="kube-system/kindnet-8xvrw"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.193243    1225 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/93c13130-1e73-4433-b82f-b565797df5c6-tmp\") pod \"storage-provisioner\" (UID: \"93c13130-1e73-4433-b82f-b565797df5c6\") " pod="kube-system/storage-provisioner"
	Oct 26 03:43:51 multinode-203818 kubelet[1225]: I1026 03:43:51.193264    1225 reconciler.go:169] "Reconciler: start to sync state"
	Oct 26 03:43:52 multinode-203818 kubelet[1225]: I1026 03:43:52.309197    1225 request.go:682] Waited for 1.010931579s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Oct 26 03:43:52 multinode-203818 kubelet[1225]: I1026 03:43:52.805073    1225 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="58c17caf960eba4ad6900405153d6dff63f2477ca76c08f3ceb491aff23a7c94"
	Oct 26 03:43:52 multinode-203818 kubelet[1225]: I1026 03:43:52.891903    1225 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3f36fff4de245e44a79ebaa0eed0f3355fe6aa33dcb452cd0104bd41909d3c9b"
	Oct 26 03:43:54 multinode-203818 kubelet[1225]: I1026 03:43:54.943944    1225 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 03:44:23 multinode-203818 kubelet[1225]: I1026 03:44:23.135318    1225 scope.go:115] "RemoveContainer" containerID="6e75fc801378cb9280b44f2bc96ebe1a62f195afcca5b1e68d9f3ed8724619ea"
	Oct 26 03:44:23 multinode-203818 kubelet[1225]: I1026 03:44:23.135520    1225 scope.go:115] "RemoveContainer" containerID="bfac4a3b6563f338e1db6a9b1e67a887f2a70700fe8e2b1970ae9c83d087ee23"
	Oct 26 03:44:23 multinode-203818 kubelet[1225]: E1026 03:44:23.135629    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(93c13130-1e73-4433-b82f-b565797df5c6)\"" pod="kube-system/storage-provisioner" podUID=93c13130-1e73-4433-b82f-b565797df5c6
	Oct 26 03:44:35 multinode-203818 kubelet[1225]: I1026 03:44:35.257950    1225 scope.go:115] "RemoveContainer" containerID="bfac4a3b6563f338e1db6a9b1e67a887f2a70700fe8e2b1970ae9c83d087ee23"
	
	* 
	* ==> storage-provisioner [3bcdfd3001ba] <==
	* I1026 03:44:35.345577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 03:44:35.352114       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 03:44:35.352158       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 03:44:52.729782       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 03:44:52.729918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-203818_2187c958-28be-4dae-a0e9-bb6d24f9c891!
	I1026 03:44:52.729949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"63457b82-14ab-4775-a6f4-eb5a9e4d635a", APIVersion:"v1", ResourceVersion:"1161", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-203818_2187c958-28be-4dae-a0e9-bb6d24f9c891 became leader
	I1026 03:44:52.830206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-203818_2187c958-28be-4dae-a0e9-bb6d24f9c891!
	
	* 
	* ==> storage-provisioner [bfac4a3b6563] <==
	* I1026 03:43:52.892905       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 03:44:22.878530       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-203818 -n multinode-203818
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-203818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-203818 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context multinode-203818 describe pod : exit status 1 (65.053234ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context multinode-203818 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/RestartMultiNode (185.60s)

TestRunningBinaryUpgrade (1868.72s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2864979334.exe start -p running-upgrade-205554 --memory=2200 --vm-driver=docker 
E1025 20:57:04.127113    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.132882    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.143874    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.164877    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.205343    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.286648    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.446844    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:04.768867    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:05.411107    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:06.692769    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:09.254929    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:14.376920    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:24.618930    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:57:45.100902    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2864979334.exe start -p running-upgrade-205554 --memory=2200 --vm-driver=docker : exit status 70 (8m23.221172141s)

-- stdout --
	* [running-upgrade-205554] minikube v1.9.0 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2328302921
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: c0070b09a219a7d04a87a01650268cc41e706fcab2af97ba10ae6079b6f83440
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "running-upgrade-205554" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: c7eedd4e26a174f00e56b256f9508f87d3ef66de81dbea05c0fefff895256d59
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p running-upgrade-205554", then "minikube start -p running-upgrade-205554 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: c7eedd4e26a174f00e56b256f9508f87d3ef66de81dbea05c0fefff895256d59
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
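
Every retry in this attempt fails at the same point: the legacy binary's docker CLI calls reach the Docker Desktop daemon, but the daemon cannot dial its own containerd socket (/var/run/desktop-containerd/containerd.sock: connection refused), so each container create exits 125 and minikube eventually gives up with exit status 70. A minimal Go probe separating the two conditions, "daemon reachable" versus "daemon able to create containers", is sketched below; the docker CLI on PATH and a pullable busybox image are assumptions, not part of the harness.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `docker version` only needs the daemon's API socket to answer.
	if out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").CombinedOutput(); err != nil {
		fmt.Printf("daemon unreachable: %v\n%s", err, out)
		return
	}
	// Actually creating a container exercises the daemon->containerd path
	// that fails above with the desktop-containerd dial error.
	if out, err := exec.Command("docker", "run", "--rm", "busybox", "true").CombinedOutput(); err != nil {
		fmt.Printf("container creation failed on the daemon->containerd path: %v\n%s", err, out)
		return
	}
	fmt.Println("daemon and containerd both healthy")
}

On the machine that produced this log, the first command would be expected to succeed and the second to fail with the same containerd dial error seen above.
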
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2864979334.exe start -p running-upgrade-205554 --memory=2200 --vm-driver=docker 
E1025 21:05:12.436822    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:05:12.973507    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 21:07:04.147221    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2864979334.exe start -p running-upgrade-205554 --memory=2200 --vm-driver=docker : exit status 70 (12m54.557609231s)

-- stdout --
	* [running-upgrade-205554] minikube v1.9.0 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig172157363
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-205554" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 6c78867d2bda029c448e90cb6395923c0c995004e9445f3dcc393fe4293555e0
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "running-upgrade-205554" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: b688799c8370848f4a96595672a384a7c4ed078bc239d439a2244e5fe24209e8
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p running-upgrade-205554", then "minikube start -p running-upgrade-205554 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: b688799c8370848f4a96595672a384a7c4ed078bc239d439a2244e5fe24209e8
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2864979334.exe start -p running-upgrade-205554 --memory=2200 --vm-driver=docker 
E1025 21:19:56.051979    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 21:20:12.457229    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:20:12.992114    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2864979334.exe start -p running-upgrade-205554 --memory=2200 --vm-driver=docker : exit status 70 (9m45.548210118s)

-- stdout --
	* [running-upgrade-205554] minikube v1.9.0 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig726530841
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-205554" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 1a348b07d5fbddf20bd4e3d2318c745ce2176f89c7d73dd58b40352aa1f0730f
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "running-upgrade-205554" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: c783ae83f81312e0ff5accf93020ea0a03d9b4fb79026b7c54da7f4854c39418
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p running-upgrade-205554", then "minikube start -p running-upgrade-205554 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-205554 --name running-upgrade-205554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-205554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-205554 --volume running-upgrade-205554:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: c783ae83f81312e0ff5accf93020ea0a03d9b4fb79026b7c54da7f4854c39418
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2022-10-25 21:27:00.939444 -0700 PDT m=+4178.556894908
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-205554
helpers_test.go:235: (dbg) docker inspect running-upgrade-205554:

-- stdout --
	[
	    {
	        "Id": "c783ae83f81312e0ff5accf93020ea0a03d9b4fb79026b7c54da7f4854c39418",
	        "Created": "2022-10-26T04:26:58.782952381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "connection error: desc = \"transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused\": unavailable",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/c783ae83f81312e0ff5accf93020ea0a03d9b4fb79026b7c54da7f4854c39418/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c783ae83f81312e0ff5accf93020ea0a03d9b4fb79026b7c54da7f4854c39418/hostname",
	        "HostsPath": "/var/lib/docker/containers/c783ae83f81312e0ff5accf93020ea0a03d9b4fb79026b7c54da7f4854c39418/hosts",
	        "LogPath": "",
	        "Name": "/running-upgrade-205554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-205554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1289ad7ebc1caa6fe9c43898ea21d50bc828987e5ea9b26bf0451d15a67c02ca-init/diff:/var/lib/docker/overlay2/36916057cbc1f2306b34625139cd40ed853449182b5da3224843a94be0fd11d9/diff:/var/lib/docker/overlay2/398727bf70fe3afa9b28db71f8af53d993dc4735cdc5b8f651b79361732e5251/diff:/var/lib/docker/overlay2/d7b22dbee788e7f1a64ab8ceae2d2a034207a6dc22f677c029b9ca9bfdbdb53c/diff:/var/lib/docker/overlay2/39c39c3bc965f83f87c0350d26c11451c30ac39f168aed5590c2ad0eb65dbd52/diff:/var/lib/docker/overlay2/98eab6f4edb27e3de3dc6ba28aeb912a58ce59a25a3fb89d3e5209d0f7b9a2c1/diff:/var/lib/docker/overlay2/58838845f7ffc03ae12adc13913179dfd461c9322d8cf7014ed62111fefbad5b/diff:/var/lib/docker/overlay2/07fba395b9e91304a8169f93d3b1a5593549e5421815d443d7ae1cac5cd255b0/diff:/var/lib/docker/overlay2/e84d849dc7a51d218c9525b76f525ae2659d62f90a6c42ad487031cfeb4b100d/diff:/var/lib/docker/overlay2/2ce15a4e33aa1f2db4bf73b234b76b3181dce47f76962cc66d11d1c28db5ef2c/diff:/var/lib/docker/overlay2/2711bc
9b96ed4b592005d0acaff1a65535c3a4683465f9560b45598858faa2b0/diff:/var/lib/docker/overlay2/3a205b9af960cc90731ce14841354bed427a26295263d8c705ebc3ea36c15197/diff:/var/lib/docker/overlay2/0cf0d625eede3b724562fdebe8023706013367b9e9458199e2b79d9e3cc3c6e4/diff:/var/lib/docker/overlay2/9c8ea4d816e132e6924410a3745de9cd12a523acdcc0a9a0a288bd5ac2178665/diff:/var/lib/docker/overlay2/9d4e2219cbee899c462008264c21d0bb6b00bcf18fde87451e100c3e3a27a597/diff:/var/lib/docker/overlay2/c973f91c31577cdae1af99537e09b9acb638714dbfb8c66e86ad4bf40e7c60c0/diff:/var/lib/docker/overlay2/8a7f48e3e185b509c767ee53fd52a78b596135bc32c0d7f3b9cffb921bb9f32b/diff:/var/lib/docker/overlay2/4c2378b59acca95153176610a97194c2da7389d4fb70b42f7db26394bc65e67e/diff:/var/lib/docker/overlay2/426fa7925c57dd7d59cf3154cd982a06f39959a9f60e060a907df83a2fdd9f3f/diff:/var/lib/docker/overlay2/43201db72043ffc68f4cee0bec0d673a058f98faf05dfbc6f13c3432ac8358f5/diff:/var/lib/docker/overlay2/bcc588c2679bbfa17723e08b01afd6175b0fc589942d239247caaeb978997f93/diff:/var/lib/d
ocker/overlay2/9431d58668d66cdda479f66519006cf2b64a1caebfb5da7d4e2b32c1711560b4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1289ad7ebc1caa6fe9c43898ea21d50bc828987e5ea9b26bf0451d15a67c02ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1289ad7ebc1caa6fe9c43898ea21d50bc828987e5ea9b26bf0451d15a67c02ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1289ad7ebc1caa6fe9c43898ea21d50bc828987e5ea9b26bf0451d15a67c02ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-205554",
	                "Source": "/var/lib/docker/volumes/running-upgrade-205554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-205554",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-205554",
	                "name.minikube.sigs.k8s.io": "running-upgrade-205554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e233b1beacab5287d8d7d926489fc5079700a9cd871356c05bb10cde9f5026c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/4e233b1beaca",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "38b0fad705775dc2308169ed795c185541c7cdc32abfbcb72d9500391b505c7f",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
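
The post-mortem above dumps the whole inspect document; the fields that actually explain the failure (State.Status "created", ExitCode 128, and the containerd dial error) can be pulled directly, since docker inspect accepts a Go template via --format. A short sketch, shelling out the same way the harness's cli_runner does (the exec wrapper here is illustrative, not the harness code; the profile name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pull only the State fields that matter for this failure.
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}} exit={{.State.ExitCode}} err={{.State.Error}}",
		"running-upgrade-205554").CombinedOutput()
	if err != nil {
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out)) // here: "created exit=128 err=connection error: ..."
}
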
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-205554 -n running-upgrade-205554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-205554 -n running-upgrade-205554: exit status 7 (114.993091ms)

-- stdout --
	Nonexistent

-- /stdout --
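
The status probe above passes a Go template ({{.Host}}) so that only the host state is printed; "Nonexistent" with a non-zero exit is how a never-started or already-deleted host reports, and helpers_test.go treats the exit status 7 as "may be ok". A sketch of the same call, using the binary and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the harness runs: host state only, via a Go template.
	out, err := exec.Command("out/minikube-darwin-amd64", "status",
		"--format", "{{.Host}}", "-p", "running-upgrade-205554").CombinedOutput()
	fmt.Print(string(out)) // "Nonexistent" for this profile
	if err != nil {
		// minikube status signals a not-running host through its exit code.
		fmt.Printf("(non-zero exit encodes the state: %v)\n", err)
	}
}
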
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-205554" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-205554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-205554
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-205554: (1.961209195s)
--- FAIL: TestRunningBinaryUpgrade (1868.72s)
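
Each failed start above ends with the same recovery suggestion: delete the profile, then start again with --alsologtostderr -v=1 for more logging. A sketch chaining those two suggested commands (same binary and profile name as this run; whether the retry succeeds depends on Docker Desktop's containerd coming back):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Delete the half-created profile, then retry with verbose logging,
	// exactly as the log output above recommends.
	for _, args := range [][]string{
		{"delete", "-p", "running-upgrade-205554"},
		{"start", "-p", "running-upgrade-205554", "--alsologtostderr", "-v=1"},
	} {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n%s", args, err, out)
			return
		}
	}
	fmt.Println("profile recreated")
}
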

TestKubernetesUpgrade (55.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-205321 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-205321 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 80 (39.748664497s)

-- stdout --
	* [kubernetes-upgrade-205321] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-205321 in cluster kubernetes-upgrade-205321
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-205321" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 20:53:21.322758   11211 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:53:21.322893   11211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:53:21.322898   11211 out.go:309] Setting ErrFile to fd 2...
	I1025 20:53:21.322901   11211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:53:21.323022   11211 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:53:21.323533   11211 out.go:303] Setting JSON to false
	I1025 20:53:21.341345   11211 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3170,"bootTime":1666753231,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:53:21.341442   11211 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:53:21.364012   11211 out.go:177] * [kubernetes-upgrade-205321] minikube v1.27.1 on Darwin 12.6
	I1025 20:53:21.405458   11211 notify.go:220] Checking for updates...
	I1025 20:53:21.426669   11211 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:53:21.447663   11211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:53:21.468489   11211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:53:21.489706   11211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:53:21.510708   11211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:53:21.531949   11211 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 20:53:21.532006   11211 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:53:21.617077   11211 docker.go:137] docker version: linux-20.10.17
	I1025 20:53:21.617235   11211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:53:21.774186   11211 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:62 SystemTime:2022-10-26 03:53:21.692494531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:53:21.796157   11211 out.go:177] * Using the docker driver based on user configuration
	I1025 20:53:21.816834   11211 start.go:282] selected driver: docker
	I1025 20:53:21.816848   11211 start.go:808] validating driver "docker" against <nil>
	I1025 20:53:21.816863   11211 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:53:21.819641   11211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:53:21.972560   11211 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:62 SystemTime:2022-10-26 03:53:21.900075278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:53:21.972674   11211 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 20:53:21.972852   11211 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 20:53:21.994454   11211 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 20:53:22.015490   11211 cni.go:95] Creating CNI manager for ""
	I1025 20:53:22.015510   11211 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 20:53:22.015530   11211 start_flags.go:317] config:
	{Name:kubernetes-upgrade-205321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-205321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:53:22.037375   11211 out.go:177] * Starting control plane node kubernetes-upgrade-205321 in cluster kubernetes-upgrade-205321
	I1025 20:53:22.079336   11211 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:53:22.100349   11211 out.go:177] * Pulling base image ...
	I1025 20:53:22.142312   11211 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 20:53:22.142344   11211 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:53:22.142377   11211 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 20:53:22.142392   11211 cache.go:57] Caching tarball of preloaded images
	I1025 20:53:22.142522   11211 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 20:53:22.142534   11211 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 20:53:22.143047   11211 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kubernetes-upgrade-205321/config.json ...
	I1025 20:53:22.143114   11211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kubernetes-upgrade-205321/config.json: {Name:mkdefdd19347dc2026e53dc2eb3899fb0303b953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:53:22.212555   11211 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:53:22.212586   11211 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:53:22.212596   11211 cache.go:208] Successfully downloaded all kic artifacts
	I1025 20:53:22.212647   11211 start.go:364] acquiring machines lock for kubernetes-upgrade-205321: {Name:mk08e6c3268915fe3a30b8582f01f341447ad995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:53:22.212846   11211 start.go:368] acquired machines lock for "kubernetes-upgrade-205321" in 185.786µs
	I1025 20:53:22.212872   11211 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-205321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-205321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 20:53:22.212942   11211 start.go:125] createHost starting for "" (driver="docker")
	I1025 20:53:22.255212   11211 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 20:53:22.255409   11211 start.go:159] libmachine.API.Create for "kubernetes-upgrade-205321" (driver="docker")
	I1025 20:53:22.255439   11211 client.go:168] LocalClient.Create starting
	I1025 20:53:22.255514   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 20:53:22.255550   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:22.255567   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:22.255626   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 20:53:22.255648   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:22.255656   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:22.256114   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 20:53:22.326406   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 20:53:22.326541   11211 network_create.go:272] running [docker network inspect kubernetes-upgrade-205321] to gather additional debugging logs...
	I1025 20:53:22.326564   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321
	W1025 20:53:22.397979   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:22.398018   11211 network_create.go:275] error running [docker network inspect kubernetes-upgrade-205321]: docker network inspect kubernetes-upgrade-205321: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-205321
	I1025 20:53:22.398039   11211 network_create.go:277] output of [docker network inspect kubernetes-upgrade-205321]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-205321
	
	** /stderr **
	I1025 20:53:22.398135   11211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:53:22.467657   11211 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000122330] misses:0}
	I1025 20:53:22.467712   11211 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.467739   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 20:53:22.467902   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	W1025 20:53:22.539587   11211 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:22.539622   11211 network_create.go:107] failed to create docker network kubernetes-upgrade-205321 192.168.49.0/24, will retry: subnet is taken
	I1025 20:53:22.540075   11211 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:false}} dirty:map[] misses:0}
	I1025 20:53:22.540092   11211 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.540295   11211 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8] misses:0}
	I1025 20:53:22.540308   11211 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.540323   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 20:53:22.540400   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	W1025 20:53:22.612915   11211 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:22.612977   11211 network_create.go:107] failed to create docker network kubernetes-upgrade-205321 192.168.58.0/24, will retry: subnet is taken
	I1025 20:53:22.613285   11211 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8] misses:1}
	I1025 20:53:22.613302   11211 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.613524   11211 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] misses:1}
	I1025 20:53:22.613540   11211 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.613555   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 20:53:22.613655   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	I1025 20:53:22.798653   11211 network_create.go:99] docker network kubernetes-upgrade-205321 192.168.67.0/24 created
	I1025 20:53:22.798687   11211 kic.go:106] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-205321" container
	I1025 20:53:22.798783   11211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 20:53:22.864367   11211 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-205321 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --label created_by.minikube.sigs.k8s.io=true
	I1025 20:53:23.237140   11211 oci.go:103] Successfully created a docker volume kubernetes-upgrade-205321
	I1025 20:53:23.237280   11211 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 20:53:23.457492   11211 cli_runner.go:211] docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 20:53:23.457541   11211 client.go:171] LocalClient.Create took 1.202093531s
	I1025 20:53:25.459928   11211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:53:25.460062   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:25.521157   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:25.521254   11211 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:25.799803   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:25.864051   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:25.864142   11211 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:26.406653   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:26.471293   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:26.471377   11211 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.128886   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:27.193035   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:27.193119   11211 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:27.193142   11211 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.193192   11211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:53:27.193252   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:27.254530   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:27.254605   11211 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.486737   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:27.551906   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:27.551991   11211 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.997846   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:28.086959   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:28.087040   11211 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:28.406840   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:28.494195   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:28.494279   11211 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:29.048607   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:29.109496   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:29.109581   11211 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:29.109595   11211 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:29.109607   11211 start.go:128] duration metric: createHost completed in 6.896656149s
	I1025 20:53:29.109615   11211 start.go:83] releasing machines lock for "kubernetes-upgrade-205321", held for 6.896758025s
	W1025 20:53:29.109629   11211 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 20:53:29.110015   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:29.171812   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:29.171853   11211 delete.go:82] Unable to get host status for kubernetes-upgrade-205321, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	W1025 20:53:29.171975   11211 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 20:53:29.171984   11211 start.go:618] Will try again in 5 seconds ...
	I1025 20:53:34.172114   11211 start.go:364] acquiring machines lock for kubernetes-upgrade-205321: {Name:mk08e6c3268915fe3a30b8582f01f341447ad995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:53:34.172220   11211 start.go:368] acquired machines lock for "kubernetes-upgrade-205321" in 84.348µs
	I1025 20:53:34.172238   11211 start.go:96] Skipping create...Using existing machine configuration
	I1025 20:53:34.172246   11211 fix.go:55] fixHost starting: 
	I1025 20:53:34.172442   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.237102   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:34.237153   11211 fix.go:103] recreateIfNeeded on kubernetes-upgrade-205321: state= err=unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.237191   11211 fix.go:108] machineExists: false. err=machine does not exist
	I1025 20:53:34.268859   11211 out.go:177] * docker "kubernetes-upgrade-205321" container is missing, will recreate.
	I1025 20:53:34.310928   11211 delete.go:124] DEMOLISHING kubernetes-upgrade-205321 ...
	I1025 20:53:34.311098   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.372164   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	W1025 20:53:34.372214   11211 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.372238   11211 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.372593   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.433566   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:34.433627   11211 delete.go:82] Unable to get host status for kubernetes-upgrade-205321, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.433715   11211 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-205321
	W1025 20:53:34.494504   11211 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:34.494535   11211 kic.go:356] could not find the container kubernetes-upgrade-205321 to remove it. will try anyways
	I1025 20:53:34.494608   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.555572   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	W1025 20:53:34.555615   11211 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.555699   11211 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-205321 /bin/bash -c "sudo init 0"
	W1025 20:53:34.618081   11211 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-205321 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 20:53:34.618125   11211 oci.go:646] error shutdown kubernetes-upgrade-205321: docker exec --privileged -t kubernetes-upgrade-205321 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:35.618397   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:35.682205   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:35.682269   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:35.682281   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:35.682303   11211 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.083616   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:36.147998   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:36.148054   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.148064   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:36.148083   11211 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.911840   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:36.973493   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:36.973542   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.973551   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:36.973571   11211 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:38.451982   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:38.518066   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:38.518119   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:38.518131   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:38.518151   11211 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:39.723890   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:39.785406   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:39.785457   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:39.785467   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:39.785489   11211 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:42.014944   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:42.077318   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:42.077376   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:42.077393   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:42.077414   11211 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:45.184314   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:45.248279   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:45.248329   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:45.248339   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:45.248358   11211 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:50.768809   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:50.836208   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:50.836255   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:50.836267   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:50.836294   11211 oci.go:88] couldn't shut down kubernetes-upgrade-205321 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	 
	I1025 20:53:50.836364   11211 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-205321
	I1025 20:53:50.898984   11211 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-205321
	W1025 20:53:50.958326   11211 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:50.958444   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:53:51.018871   11211 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-205321
	W1025 20:53:51.131889   11211 delete.go:139] delete failed (probably ok) <nil>
	I1025 20:53:51.131907   11211 fix.go:115] Sleeping 1 second for extra luck!
	I1025 20:53:52.134019   11211 start.go:125] createHost starting for "" (driver="docker")
	I1025 20:53:52.156610   11211 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 20:53:52.156793   11211 start.go:159] libmachine.API.Create for "kubernetes-upgrade-205321" (driver="docker")
	I1025 20:53:52.156837   11211 client.go:168] LocalClient.Create starting
	I1025 20:53:52.157075   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 20:53:52.157161   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:52.157186   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:52.157268   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 20:53:52.157336   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:52.157354   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:52.178835   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 20:53:52.244431   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 20:53:52.244510   11211 network_create.go:272] running [docker network inspect kubernetes-upgrade-205321] to gather additional debugging logs...
	I1025 20:53:52.244535   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321
	W1025 20:53:52.305441   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:52.305461   11211 network_create.go:275] error running [docker network inspect kubernetes-upgrade-205321]: docker network inspect kubernetes-upgrade-205321: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-205321
	I1025 20:53:52.305477   11211 network_create.go:277] output of [docker network inspect kubernetes-upgrade-205321]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-205321
	
	** /stderr **
	I1025 20:53:52.305564   11211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:53:52.367141   11211 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] misses:1}
	I1025 20:53:52.367170   11211 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367400   11211 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] misses:2}
	I1025 20:53:52.367412   11211 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367621   11211 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] amended:false}} dirty:map[] misses:0}
	I1025 20:53:52.367630   11211 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367834   11211 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8 192.168.76.0:0xc000b22588] misses:0}
	I1025 20:53:52.367847   11211 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367854   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 20:53:52.367924   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	I1025 20:53:52.465378   11211 network_create.go:99] docker network kubernetes-upgrade-205321 192.168.76.0/24 created
	I1025 20:53:52.465408   11211 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-205321" container
	I1025 20:53:52.465517   11211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 20:53:52.533696   11211 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-205321 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --label created_by.minikube.sigs.k8s.io=true
	I1025 20:53:52.595267   11211 oci.go:103] Successfully created a docker volume kubernetes-upgrade-205321
	I1025 20:53:52.595382   11211 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 20:53:52.723694   11211 cli_runner.go:211] docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 20:53:52.723754   11211 client.go:171] LocalClient.Create took 566.908286ms
	I1025 20:53:54.724802   11211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:53:54.724888   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:54.785234   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:54.785337   11211 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:54.985954   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:55.050904   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:55.051014   11211 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:55.493487   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:55.556474   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:55.556566   11211 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:55.963219   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:56.024837   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:56.024923   11211 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:56.620624   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:56.683026   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:56.683120   11211 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:56.683145   11211 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:56.683189   11211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:53:56.683271   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:56.742237   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:56.742317   11211 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:57.010312   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:57.072591   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:57.072685   11211 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:57.583861   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:57.646440   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:57.646544   11211 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.094933   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.157399   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:58.157493   11211 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:58.157519   11211 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.157534   11211 start.go:128] duration metric: createHost completed in 6.023473332s
	I1025 20:53:58.157594   11211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:53:58.157641   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.217195   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:58.217285   11211 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.532819   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.598817   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:58.598899   11211 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.866274   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.930595   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:58.930681   11211 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:59.701034   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:59.764130   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:59.764223   11211 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:59.764249   11211 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:59.764298   11211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:53:59.764365   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:59.823618   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:59.823696   11211 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.079903   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:54:00.142257   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:54:00.142348   11211 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.340829   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:54:00.404964   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:54:00.405050   11211 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.775714   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:54:00.838448   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:54:00.838542   11211 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:54:00.838570   11211 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.838579   11211 fix.go:57] fixHost completed within 26.666317325s
	I1025 20:54:00.838586   11211 start.go:83] releasing machines lock for "kubernetes-upgrade-205321", held for 26.666342547s
	W1025 20:54:00.838725   11211 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-205321" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-205321" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 20:54:00.882171   11211 out.go:177] 
	W1025 20:54:00.903423   11211 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 20:54:00.903455   11211 out.go:239] * 
	* 
	W1025 20:54:00.904586   11211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 20:54:00.988172   11211 out.go:177] 

** /stderr **
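Note: the failure chain in the stderr capture above is consistent from start to finish. The preload-sidecar probe (`docker run --rm ... --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var ... -d /var/lib`) exits 125 because Docker Desktop's containerd socket (`/var/run/desktop-containerd/containerd.sock`) refuses connections, so the node container is never created, and every later `docker container inspect` fails with `Error: No such container`. The repeated `retry.go:31] will retry after ...` lines show a retry loop with short randomized delays around that same inspect call. The Go sketch below is a minimal illustration of that pattern, not minikube's actual retry.go; the helper name retryWithBackoff and its signature are assumptions.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryWithBackoff is a hypothetical helper mirroring the
	// "will retry after Nms" lines in the log: run fn, and on
	// failure sleep a short randomized delay before trying again.
	func retryWithBackoff(maxAttempts int, fn func() error) error {
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(200+rand.Intn(600)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// The same port-22 inspect the log keeps retrying; it can
		// only succeed once the container actually exists.
		err := retryWithBackoff(5, func() error {
			out, err := exec.Command("docker", "container", "inspect",
				"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
				"kubernetes-upgrade-205321").CombinedOutput()
			if err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
			return nil
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}
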
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-205321 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-205321
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-205321: exit status 82 (14.648219879s)

-- stdout --
	* Stopping node "kubernetes-upgrade-205321"  ...
	* Stopping node "kubernetes-upgrade-205321"  ...
	* Stopping node "kubernetes-upgrade-205321"  ...
	* Stopping node "kubernetes-upgrade-205321"  ...
	* Stopping node "kubernetes-upgrade-205321"  ...
	* Stopping node "kubernetes-upgrade-205321"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-205321 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-205321 failed: exit status 82
panic.go:522: *** TestKubernetesUpgrade FAILED at 2022-10-25 20:54:15.671186 -0700 PDT m=+2213.310466267
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-205321
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-205321:

-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-205321",
	        "Id": "86b19f2b1e4882a69cd71e8f704c13f2d451297d262d37a320342efb90cf0df5",
	        "Created": "2022-10-26T03:53:52.438337875Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "kubernetes-upgrade-205321"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-205321 -n kubernetes-upgrade-205321
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-205321 -n kubernetes-upgrade-205321: exit status 7 (111.320555ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 20:54:15.847143   11637 status.go:249] status error: host: state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-205321" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-205321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-205321
--- FAIL: TestKubernetesUpgrade (55.41s)
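Every failure in this test traces to the same daemon-side symptom: dialing /var/run/desktop-containerd/containerd.sock is refused, so `docker run` and `docker container inspect` fail before any minikube logic is exercised. A minimal preflight sketch (a hypothetical harness helper, not part of the suite) that could separate a dead Docker Desktop daemon from a genuine minikube regression:

	// preflight.go - hypothetical harness helper; assumes only that `docker version`
	// round-trips to the daemon and fails while the containerd socket refuses dials.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// dockerAlive polls the daemon until it answers or the deadline passes.
	func dockerAlive(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").CombinedOutput()
			if err == nil {
				fmt.Printf("docker daemon up, server %s", out)
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("docker daemon unreachable: %v\n%s", err, out)
			}
			time.Sleep(5 * time.Second)
		}
	}

	func main() {
		if err := dockerAlive(2 * time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1) // flag the run as an infrastructure failure, not a regression
		}
	}

Run before the suite, a non-zero exit here would mark runs like this one (exit status 80/82 on every docker call) as infrastructure failures rather than test regressions.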

TestMissingContainerUpgrade (202.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1695622508.exe start -p missing-upgrade-205231 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1695622508.exe start -p missing-upgrade-205231 --memory=2200 --driver=docker : exit status 78 (54.085308285s)

-- stdout --
	! [missing-upgrade-205231] minikube v1.9.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-205231
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-205231" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.27.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-26 03:53:07.220982995 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-205231" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-26 03:53:24.407168882 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
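The provisioned unit quoted in the diff documents the systemd rule at play: multiple non-empty `ExecStart=` lines are only valid for `Type=oneshot` services, so a replacement unit must first reset the inherited command with a bare `ExecStart=`. A hedged sketch of that rule as a standalone check (hypothetical lint; it scans a single merged unit file, not a real drop-in directory):

	// execstart_lint.go - hypothetical illustration of the rule quoted in the diff:
	// "Service has more than one ExecStart= setting, which is only allowed for
	// Type=oneshot services." A bare `ExecStart=` resets everything seen so far.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: execstart_lint <unit-file>")
			os.Exit(2)
		}
		f, err := os.Open(os.Args[1]) // e.g. /lib/systemd/system/docker.service
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		active := 0 // non-empty ExecStart= lines currently in effect
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "ExecStart=":
				active = 0 // reset: clears any inherited or earlier command
			case strings.HasPrefix(line, "ExecStart="):
				active++
			}
		}
		if active > 1 {
			fmt.Println("systemd would reject this unit: multiple ExecStart= without a reset")
			os.Exit(1)
		}
		fmt.Println("ExecStart usage ok")
	}

The unit written above passes this rule; the restart still failed, and the log only shows the symptom ("Job for docker.service failed"), so the root cause is not recoverable from the diff alone.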
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1695622508.exe start -p missing-upgrade-205231 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1695622508.exe start -p missing-upgrade-205231 --memory=2200 --driver=docker : exit status 70 (15.467463259s)

-- stdout --
	* [missing-upgrade-205231] minikube v1.9.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-205231
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Updating the running docker "missing-upgrade-205231" container ...
	* Updating the running docker "missing-upgrade-205231" container ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress elided)
	! StartHost failed, but will try again: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-205231" may fix it.: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1695622508.exe start -p missing-upgrade-205231 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1695622508.exe start -p missing-upgrade-205231 --memory=2200 --driver=docker : exit status 70 (8.369622493s)

-- stdout --
	* [missing-upgrade-205231] minikube v1.9.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-205231
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-205231" container ...
	* Updating the running docker "missing-upgrade-205231" container ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-205231" may fix it.: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2022-10-25 20:53:53.870246 -0700 PDT m=+2191.509539522
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-205231
helpers_test.go:235: (dbg) docker inspect missing-upgrade-205231:

-- stdout --
	[
	    {
	        "Id": "1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7",
	        "Created": "2022-10-26T03:53:15.419648372Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 132096,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-26T03:53:15.642330478Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7/hosts",
	        "LogPath": "/var/lib/docker/containers/1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7/1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7-json.log",
	        "Name": "/missing-upgrade-205231",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-205231:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/28821a0c12e62ae056a4c0373c25215868ccae1471afb2cdec89bcaa4406dc85-init/diff:/var/lib/docker/overlay2/36916057cbc1f2306b34625139cd40ed853449182b5da3224843a94be0fd11d9/diff:/var/lib/docker/overlay2/398727bf70fe3afa9b28db71f8af53d993dc4735cdc5b8f651b79361732e5251/diff:/var/lib/docker/overlay2/d7b22dbee788e7f1a64ab8ceae2d2a034207a6dc22f677c029b9ca9bfdbdb53c/diff:/var/lib/docker/overlay2/39c39c3bc965f83f87c0350d26c11451c30ac39f168aed5590c2ad0eb65dbd52/diff:/var/lib/docker/overlay2/98eab6f4edb27e3de3dc6ba28aeb912a58ce59a25a3fb89d3e5209d0f7b9a2c1/diff:/var/lib/docker/overlay2/58838845f7ffc03ae12adc13913179dfd461c9322d8cf7014ed62111fefbad5b/diff:/var/lib/docker/overlay2/07fba395b9e91304a8169f93d3b1a5593549e5421815d443d7ae1cac5cd255b0/diff:/var/lib/docker/overlay2/e84d849dc7a51d218c9525b76f525ae2659d62f90a6c42ad487031cfeb4b100d/diff:/var/lib/docker/overlay2/2ce15a4e33aa1f2db4bf73b234b76b3181dce47f76962cc66d11d1c28db5ef2c/diff:/var/lib/docker/overlay2/2711bc
9b96ed4b592005d0acaff1a65535c3a4683465f9560b45598858faa2b0/diff:/var/lib/docker/overlay2/3a205b9af960cc90731ce14841354bed427a26295263d8c705ebc3ea36c15197/diff:/var/lib/docker/overlay2/0cf0d625eede3b724562fdebe8023706013367b9e9458199e2b79d9e3cc3c6e4/diff:/var/lib/docker/overlay2/9c8ea4d816e132e6924410a3745de9cd12a523acdcc0a9a0a288bd5ac2178665/diff:/var/lib/docker/overlay2/9d4e2219cbee899c462008264c21d0bb6b00bcf18fde87451e100c3e3a27a597/diff:/var/lib/docker/overlay2/c973f91c31577cdae1af99537e09b9acb638714dbfb8c66e86ad4bf40e7c60c0/diff:/var/lib/docker/overlay2/8a7f48e3e185b509c767ee53fd52a78b596135bc32c0d7f3b9cffb921bb9f32b/diff:/var/lib/docker/overlay2/4c2378b59acca95153176610a97194c2da7389d4fb70b42f7db26394bc65e67e/diff:/var/lib/docker/overlay2/426fa7925c57dd7d59cf3154cd982a06f39959a9f60e060a907df83a2fdd9f3f/diff:/var/lib/docker/overlay2/43201db72043ffc68f4cee0bec0d673a058f98faf05dfbc6f13c3432ac8358f5/diff:/var/lib/docker/overlay2/bcc588c2679bbfa17723e08b01afd6175b0fc589942d239247caaeb978997f93/diff:/var/lib/d
ocker/overlay2/9431d58668d66cdda479f66519006cf2b64a1caebfb5da7d4e2b32c1711560b4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28821a0c12e62ae056a4c0373c25215868ccae1471afb2cdec89bcaa4406dc85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28821a0c12e62ae056a4c0373c25215868ccae1471afb2cdec89bcaa4406dc85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28821a0c12e62ae056a4c0373c25215868ccae1471afb2cdec89bcaa4406dc85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-205231",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-205231/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-205231",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-205231",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-205231",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a28696527c2b00f9d803860badb60f85b602e6ad687352b3f80d004cd5a21e1f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51886"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a28696527c2b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "4de21e8698916a54c3a3f0a5b92c1788b622bddc2ced9ab806ce8527a5184576",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "38b0fad705775dc2308169ed795c185541c7cdc32abfbcb72d9500391b505c7f",
	                    "EndpointID": "4de21e8698916a54c3a3f0a5b92c1788b622bddc2ced9ab806ce8527a5184576",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-205231 -n missing-upgrade-205231
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-205231 -n missing-upgrade-205231: exit status 6 (404.186801ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1025 20:53:54.327851   11509 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-205231" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-205231" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-205231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-205231

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-darwin-amd64 delete -p missing-upgrade-205231: signal: killed (2m0.002844876s)

-- stdout --
	* Deleting "missing-upgrade-205231" in docker ...
	* Deleting container "missing-upgrade-205231" ...
	* Stopping node "missing-upgrade-205231"  ...
	* Powering off "missing-upgrade-205231" via SSH ...

-- /stdout --
** stderr ** 
	E1025 20:54:07.886695   11519 delete.go:56] error deleting container "missing-upgrade-205231". You may want to delete it manually :
	delete missing-upgrade-205231: docker rm -f -v missing-upgrade-205231: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Could not kill running container 1c202d375ea3c1870634ae4a8d0d1de9703b7fb86d61da0d96a7c838049515d7, cannot remove - tried to kill container, but did not receive an exit event

** /stderr **
helpers_test.go:180: failed cleanup: signal: killed
--- FAIL: TestMissingContainerUpgrade (202.44s)
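Cleanup itself then hangs: `docker rm -f -v` reports "tried to kill container, but did not receive an exit event", and the delete is killed by the harness after its two-minute budget. A hedged sketch of a bounded retry around force-removal (hypothetical helper; the profile name and timings are illustrative):

	// force_rm.go - hypothetical cleanup helper: retries `docker rm -f -v` a few
	// times, since a wedged daemon may deliver the container's exit event late.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// forceRemove attempts removal up to `attempts` times, returning the last error.
	func forceRemove(name string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("docker", "rm", "-f", "-v", name).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("docker rm -f -v %s: %v\n%s", name, err, out)
			time.Sleep(10 * time.Second) // brief pause before the next attempt
		}
		return lastErr
	}

	func main() {
		if err := forceRemove("missing-upgrade-205231", 6); err != nil {
			fmt.Fprintln(os.Stderr, "manual cleanup needed:", err)
			os.Exit(1)
		}
	}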

TestStoppedBinaryUpgrade/Upgrade (1568.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4231034098.exe start -p stopped-upgrade-205416 --memory=2200 --vm-driver=docker 
E1025 20:55:12.436402    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:55:12.971144    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4231034098.exe start -p stopped-upgrade-205416 --memory=2200 --vm-driver=docker : exit status 70 (3m28.473264829s)

-- stdout --
	* [stopped-upgrade-205416] minikube v1.9.0 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3932466942
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 93658a3023f121e44d9867cf3fee6a424e5f458ff6c35156f09d2214214fdbe7
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "stopped-upgrade-205416" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 647ddea1f60be3b40b821fb86a3b2171821e85d13104959ddd3f3a8db81bce6f
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p stopped-upgrade-205416", then "minikube start -p stopped-upgrade-205416 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 647ddea1f60be3b40b821fb86a3b2171821e85d13104959ddd3f3a8db81bce6f
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4231034098.exe start -p stopped-upgrade-205416 --memory=2200 --vm-driver=docker 
E1025 20:58:15.486246    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:58:26.063312    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 20:59:47.983660    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 21:00:12.436453    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:00:12.971777    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 21:02:04.127373    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 21:02:31.826211    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 21:03:16.030713    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4231034098.exe start -p stopped-upgrade-205416 --memory=2200 --vm-driver=docker : exit status 70 (9m43.89339407s)

-- stdout --
	* [stopped-upgrade-205416] minikube v1.9.0 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig287409148
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-205416" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: cf6da56fc854c2644e982e7feeceffce2571b692687bd4698e507bd438ad3c6a
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "stopped-upgrade-205416" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 02e8bfd89536c91dd971522a52b4cb65d6c1ffc979d748ba980a012697d1cfe2
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p stopped-upgrade-205416", then "minikube start -p stopped-upgrade-205416 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 02e8bfd89536c91dd971522a52b4cb65d6c1ffc979d748ba980a012697d1cfe2
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4231034098.exe start -p stopped-upgrade-205416 --memory=2200 --vm-driver=docker 
E1025 21:10:12.455817    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:10:12.990835    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 21:12:04.148031    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 21:13:27.208081    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 21:14:55.506646    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:15:12.456429    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:15:12.991709    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 21:17:04.148685    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4231034098.exe start -p stopped-upgrade-205416 --memory=2200 --vm-driver=docker : exit status 70 (12m54.124319436s)

-- stdout --
	* [stopped-upgrade-205416] minikube v1.9.0 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3116505027
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-205416" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 271a6acba3dbe037fad74a147650e97dcac8de8cd599c96f2752c8bd36170b6b
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "stopped-upgrade-205416" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: ec654e212c1b6934ed7948d7d09a8452407ff96dfe3f99adfbbdb72a94fc98e7
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p stopped-upgrade-205416", then "minikube start -p stopped-upgrade-205416 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-205416 --name stopped-upgrade-205416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-205416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-205416 --volume stopped-upgrade-205416:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: ec654e212c1b6934ed7948d7d09a8452407ff96dfe3f99adfbbdb72a94fc98e7
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1568.92s)
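Note: both recreate attempts died at the same layer. The Docker daemon itself answered ("Error response from daemon ..."), but its own dial to Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) was refused, so every container create exited 125. That points at an unhealthy Docker Desktop backend on MacOS-Agent-3 rather than at the legacy v1.9.0 binary under test. A minimal Go sketch of a two-step probe under that assumption (the /var/run/docker.sock path and the hello-world image are illustrative, not taken from this report):

package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Step 1: is the daemon itself up? In this incident it was, since the
	// error came back "from daemon", so /_ping would likely still return 200.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.DialTimeout("unix", "/var/run/docker.sock", 2*time.Second)
			},
		},
		Timeout: 5 * time.Second,
	}
	if resp, err := client.Get("http://docker/_ping"); err != nil {
		fmt.Println("daemon unreachable:", err)
		return
	} else {
		fmt.Println("daemon ping:", resp.Status)
		resp.Body.Close()
	}

	// Step 2: can the daemon actually create a container? This is the layer
	// that broke here (daemon -> desktop-containerd); expect exit status 125
	// while it is broken.
	out, err := exec.Command("docker", "run", "--rm", "hello-world").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("container create failed:", err)
	}
}

Step 1 passing while step 2 fails with the same "desktop-containerd ... connection refused" text would confirm the daemon-to-containerd link as the broken layer.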

TestStoppedBinaryUpgrade/MinikubeLogs (0.48s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-205416
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p stopped-upgrade-205416: exit status 85 (467.718478ms)

-- stdout --
	* 
	* ==> Audit <==
	* |------------|-----------------------------------------------------------------------------------------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	|  Command   |                                                            Args                                                             |           Profile           |   User   | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| ssh        | multinode-203818 ssh -n                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m02 sudo cat                                                                                               |                             |          |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                                                    |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n multinode-203818 sudo cat                                                                           | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | /home/docker/cp-test_multinode-203818-m02_multinode-203818.txt                                                              |                             |          |         |                     |                     |
	| cp         | multinode-203818 cp multinode-203818-m02:/home/docker/cp-test.txt                                                           | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m03:/home/docker/cp-test_multinode-203818-m02_multinode-203818-m03.txt                                     |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m02 sudo cat                                                                                               |                             |          |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                                                    |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n multinode-203818-m03 sudo cat                                                                       | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | /home/docker/cp-test_multinode-203818-m02_multinode-203818-m03.txt                                                          |                             |          |         |                     |                     |
	| cp         | multinode-203818 cp testdata/cp-test.txt                                                                                    | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m03:/home/docker/cp-test.txt                                                                               |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m03 sudo cat                                                                                               |                             |          |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                                                    |                             |          |         |                     |                     |
	| cp         | multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt                                                           | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3396189668/001/cp-test_multinode-203818-m03.txt |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m03 sudo cat                                                                                               |                             |          |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                                                    |                             |          |         |                     |                     |
	| cp         | multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt                                                           | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818:/home/docker/cp-test_multinode-203818-m03_multinode-203818.txt                                             |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m03 sudo cat                                                                                               |                             |          |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                                                    |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n multinode-203818 sudo cat                                                                           | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | /home/docker/cp-test_multinode-203818-m03_multinode-203818.txt                                                              |                             |          |         |                     |                     |
	| cp         | multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt                                                           | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m02:/home/docker/cp-test_multinode-203818-m03_multinode-203818-m02.txt                                     |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | multinode-203818-m03 sudo cat                                                                                               |                             |          |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                                                    |                             |          |         |                     |                     |
	| ssh        | multinode-203818 ssh -n multinode-203818-m02 sudo cat                                                                       | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	|            | /home/docker/cp-test_multinode-203818-m03_multinode-203818-m02.txt                                                          |                             |          |         |                     |                     |
	| node       | multinode-203818 node stop m03                                                                                              | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:40 PDT |
	| node       | multinode-203818 node start                                                                                                 | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:40 PDT | 25 Oct 22 20:41 PDT |
	|            | m03 --alsologtostderr                                                                                                       |                             |          |         |                     |                     |
	| node       | list -p multinode-203818                                                                                                    | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:41 PDT |                     |
	| stop       | -p multinode-203818                                                                                                         | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:41 PDT | 25 Oct 22 20:41 PDT |
	| start      | -p multinode-203818                                                                                                         | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:41 PDT | 25 Oct 22 20:42 PDT |
	|            | --wait=true -v=8                                                                                                            |                             |          |         |                     |                     |
	|            | --alsologtostderr                                                                                                           |                             |          |         |                     |                     |
	| node       | list -p multinode-203818                                                                                                    | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:42 PDT |                     |
	| node       | multinode-203818 node delete                                                                                                | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:42 PDT | 25 Oct 22 20:43 PDT |
	|            | m03                                                                                                                         |                             |          |         |                     |                     |
	| stop       | multinode-203818 stop                                                                                                       | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:43 PDT | 25 Oct 22 20:43 PDT |
	| start      | -p multinode-203818                                                                                                         | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:43 PDT |                     |
	|            | --wait=true -v=8                                                                                                            |                             |          |         |                     |                     |
	|            | --alsologtostderr                                                                                                           |                             |          |         |                     |                     |
	|            | --driver=docker                                                                                                             |                             |          |         |                     |                     |
	| node       | list -p multinode-203818                                                                                                    | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:46 PDT |                     |
	| start      | -p multinode-203818-m02                                                                                                     | multinode-203818-m02        | jenkins  | v1.27.1 | 25 Oct 22 20:46 PDT |                     |
	|            | --driver=docker                                                                                                             |                             |          |         |                     |                     |
	| start      | -p multinode-203818-m03                                                                                                     | multinode-203818-m03        | jenkins  | v1.27.1 | 25 Oct 22 20:46 PDT | 25 Oct 22 20:47 PDT |
	|            | --driver=docker                                                                                                             |                             |          |         |                     |                     |
	| node       | add -p multinode-203818                                                                                                     | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:47 PDT |                     |
	| delete     | -p multinode-203818-m03                                                                                                     | multinode-203818-m03        | jenkins  | v1.27.1 | 25 Oct 22 20:47 PDT | 25 Oct 22 20:47 PDT |
	| delete     | -p multinode-203818                                                                                                         | multinode-203818            | jenkins  | v1.27.1 | 25 Oct 22 20:47 PDT | 25 Oct 22 20:47 PDT |
	| start      | -p test-preload-204722                                                                                                      | test-preload-204722         | jenkins  | v1.27.1 | 25 Oct 22 20:47 PDT | 25 Oct 22 20:48 PDT |
	|            | --memory=2200                                                                                                               |                             |          |         |                     |                     |
	|            | --alsologtostderr                                                                                                           |                             |          |         |                     |                     |
	|            | --wait=true --preload=false                                                                                                 |                             |          |         |                     |                     |
	|            | --driver=docker                                                                                                             |                             |          |         |                     |                     |
	|            | --kubernetes-version=v1.24.4                                                                                                |                             |          |         |                     |                     |
	| ssh        | -p test-preload-204722                                                                                                      | test-preload-204722         | jenkins  | v1.27.1 | 25 Oct 22 20:48 PDT | 25 Oct 22 20:48 PDT |
	|            | -- docker pull                                                                                                              |                             |          |         |                     |                     |
	|            | gcr.io/k8s-minikube/busybox                                                                                                 |                             |          |         |                     |                     |
	| start      | -p test-preload-204722                                                                                                      | test-preload-204722         | jenkins  | v1.27.1 | 25 Oct 22 20:48 PDT | 25 Oct 22 20:49 PDT |
	|            | --memory=2200                                                                                                               |                             |          |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                                                      |                             |          |         |                     |                     |
	|            | --wait=true --driver=docker                                                                                                 |                             |          |         |                     |                     |
	|            | --kubernetes-version=v1.24.6                                                                                                |                             |          |         |                     |                     |
	| ssh        | -p test-preload-204722 --                                                                                                   | test-preload-204722         | jenkins  | v1.27.1 | 25 Oct 22 20:49 PDT | 25 Oct 22 20:49 PDT |
	|            | docker images                                                                                                               |                             |          |         |                     |                     |
	| delete     | -p test-preload-204722                                                                                                      | test-preload-204722         | jenkins  | v1.27.1 | 25 Oct 22 20:49 PDT | 25 Oct 22 20:49 PDT |
	| start      | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:49 PDT | 25 Oct 22 20:50 PDT |
	|            | --memory=2048 --driver=docker                                                                                               |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 5m                                                                                                               |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 5m                                                                                                               |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 5m                                                                                                               |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 15s                                                                                                              |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 15s                                                                                                              |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 15s                                                                                                              |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT | 25 Oct 22 20:50 PDT |
	|            | --cancel-scheduled                                                                                                          |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 15s                                                                                                              |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT |                     |
	|            | --schedule 15s                                                                                                              |                             |          |         |                     |                     |
	| stop       | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:50 PDT | 25 Oct 22 20:51 PDT |
	|            | --schedule 15s                                                                                                              |                             |          |         |                     |                     |
	| delete     | -p scheduled-stop-204939                                                                                                    | scheduled-stop-204939       | jenkins  | v1.27.1 | 25 Oct 22 20:51 PDT | 25 Oct 22 20:51 PDT |
	| start      | -p skaffold-205120                                                                                                          | skaffold-205120             | jenkins  | v1.27.1 | 25 Oct 22 20:51 PDT | 25 Oct 22 20:51 PDT |
	|            | --memory=2600 --driver=docker                                                                                               |                             |          |         |                     |                     |
	| docker-env | --shell none -p                                                                                                             | skaffold-205120             | skaffold | v1.27.1 | 25 Oct 22 20:51 PDT | 25 Oct 22 20:51 PDT |
	|            | skaffold-205120                                                                                                             |                             |          |         |                     |                     |
	|            | --user=skaffold                                                                                                             |                             |          |         |                     |                     |
	| delete     | -p skaffold-205120                                                                                                          | skaffold-205120             | jenkins  | v1.27.1 | 25 Oct 22 20:52 PDT | 25 Oct 22 20:52 PDT |
	| start      | -p insufficient-storage-205217                                                                                              | insufficient-storage-205217 | jenkins  | v1.27.1 | 25 Oct 22 20:52 PDT |                     |
	|            | --memory=2048 --output=json                                                                                                 |                             |          |         |                     |                     |
	|            | --wait=true --driver=docker                                                                                                 |                             |          |         |                     |                     |
	| delete     | -p insufficient-storage-205217                                                                                              | insufficient-storage-205217 | jenkins  | v1.27.1 | 25 Oct 22 20:52 PDT | 25 Oct 22 20:52 PDT |
	| start      | -p offline-docker-205230                                                                                                    | offline-docker-205230       | jenkins  | v1.27.1 | 25 Oct 22 20:52 PDT | 25 Oct 22 20:53 PDT |
	|            | --alsologtostderr -v=1                                                                                                      |                             |          |         |                     |                     |
	|            | --memory=2048 --wait=true                                                                                                   |                             |          |         |                     |                     |
	|            | --driver=docker                                                                                                             |                             |          |         |                     |                     |
	| delete     | -p flannel-205230                                                                                                           | flannel-205230              | jenkins  | v1.27.1 | 25 Oct 22 20:52 PDT | 25 Oct 22 20:52 PDT |
	| delete     | -p custom-flannel-205231                                                                                                    | custom-flannel-205231       | jenkins  | v1.27.1 | 25 Oct 22 20:52 PDT | 25 Oct 22 20:52 PDT |
	| delete     | -p offline-docker-205230                                                                                                    | offline-docker-205230       | jenkins  | v1.27.1 | 25 Oct 22 20:53 PDT | 25 Oct 22 20:53 PDT |
	| start      | -p kubernetes-upgrade-205321                                                                                                | kubernetes-upgrade-205321   | jenkins  | v1.27.1 | 25 Oct 22 20:53 PDT |                     |
	|            | --memory=2200                                                                                                               |                             |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0                                                                                                |                             |          |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                                                      |                             |          |         |                     |                     |
	|            | --driver=docker                                                                                                             |                             |          |         |                     |                     |
	| delete     | -p missing-upgrade-205231                                                                                                   | missing-upgrade-205231      | jenkins  | v1.27.1 | 25 Oct 22 20:53 PDT |                     |
	| stop       | -p kubernetes-upgrade-205321                                                                                                | kubernetes-upgrade-205321   | jenkins  | v1.27.1 | 25 Oct 22 20:54 PDT |                     |
	| delete     | -p kubernetes-upgrade-205321                                                                                                | kubernetes-upgrade-205321   | jenkins  | v1.27.1 | 25 Oct 22 20:54 PDT | 25 Oct 22 20:54 PDT |
	|------------|-----------------------------------------------------------------------------------------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 20:53:21
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 20:53:21.322758   11211 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:53:21.322893   11211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:53:21.322898   11211 out.go:309] Setting ErrFile to fd 2...
	I1025 20:53:21.322901   11211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:53:21.323022   11211 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:53:21.323533   11211 out.go:303] Setting JSON to false
	I1025 20:53:21.341345   11211 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3170,"bootTime":1666753231,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:53:21.341442   11211 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:53:21.364012   11211 out.go:177] * [kubernetes-upgrade-205321] minikube v1.27.1 on Darwin 12.6
	I1025 20:53:21.405458   11211 notify.go:220] Checking for updates...
	I1025 20:53:21.426669   11211 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:53:21.447663   11211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:53:21.468489   11211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:53:21.489706   11211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:53:21.510708   11211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:53:21.531949   11211 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 20:53:21.532006   11211 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:53:21.617077   11211 docker.go:137] docker version: linux-20.10.17
	I1025 20:53:21.617235   11211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:53:21.774186   11211 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:62 SystemTime:2022-10-26 03:53:21.692494531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:53:21.796157   11211 out.go:177] * Using the docker driver based on user configuration
	I1025 20:53:21.816834   11211 start.go:282] selected driver: docker
	I1025 20:53:21.816848   11211 start.go:808] validating driver "docker" against <nil>
	I1025 20:53:21.816863   11211 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:53:21.819641   11211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:53:21.972560   11211 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:62 SystemTime:2022-10-26 03:53:21.900075278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:53:21.972674   11211 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 20:53:21.972852   11211 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 20:53:21.994454   11211 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 20:53:22.015490   11211 cni.go:95] Creating CNI manager for ""
	I1025 20:53:22.015510   11211 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 20:53:22.015530   11211 start_flags.go:317] config:
	{Name:kubernetes-upgrade-205321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-205321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:53:22.037375   11211 out.go:177] * Starting control plane node kubernetes-upgrade-205321 in cluster kubernetes-upgrade-205321
	I1025 20:53:22.079336   11211 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:53:22.100349   11211 out.go:177] * Pulling base image ...
	I1025 20:53:22.142312   11211 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 20:53:22.142344   11211 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:53:22.142377   11211 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 20:53:22.142392   11211 cache.go:57] Caching tarball of preloaded images
	I1025 20:53:22.142522   11211 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 20:53:22.142534   11211 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 20:53:22.143047   11211 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kubernetes-upgrade-205321/config.json ...
	I1025 20:53:22.143114   11211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kubernetes-upgrade-205321/config.json: {Name:mkdefdd19347dc2026e53dc2eb3899fb0303b953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:53:22.212555   11211 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 20:53:22.212586   11211 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 20:53:22.212596   11211 cache.go:208] Successfully downloaded all kic artifacts
	I1025 20:53:22.212647   11211 start.go:364] acquiring machines lock for kubernetes-upgrade-205321: {Name:mk08e6c3268915fe3a30b8582f01f341447ad995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:53:22.212846   11211 start.go:368] acquired machines lock for "kubernetes-upgrade-205321" in 185.786µs
	I1025 20:53:22.212872   11211 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-205321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-205321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 20:53:22.212942   11211 start.go:125] createHost starting for "" (driver="docker")
	I1025 20:53:22.255212   11211 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 20:53:22.255409   11211 start.go:159] libmachine.API.Create for "kubernetes-upgrade-205321" (driver="docker")
	I1025 20:53:22.255439   11211 client.go:168] LocalClient.Create starting
	I1025 20:53:22.255514   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 20:53:22.255550   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:22.255567   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:22.255626   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 20:53:22.255648   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:22.255656   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:22.256114   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 20:53:22.326406   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 20:53:22.326541   11211 network_create.go:272] running [docker network inspect kubernetes-upgrade-205321] to gather additional debugging logs...
	I1025 20:53:22.326564   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321
	W1025 20:53:22.397979   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:22.398018   11211 network_create.go:275] error running [docker network inspect kubernetes-upgrade-205321]: docker network inspect kubernetes-upgrade-205321: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-205321
	I1025 20:53:22.398039   11211 network_create.go:277] output of [docker network inspect kubernetes-upgrade-205321]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-205321
	
	** /stderr **
	I1025 20:53:22.398135   11211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:53:22.467657   11211 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000122330] misses:0}
	I1025 20:53:22.467712   11211 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.467739   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 20:53:22.467902   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	W1025 20:53:22.539587   11211 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:22.539622   11211 network_create.go:107] failed to create docker network kubernetes-upgrade-205321 192.168.49.0/24, will retry: subnet is taken
	I1025 20:53:22.540075   11211 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:false}} dirty:map[] misses:0}
	I1025 20:53:22.540092   11211 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.540295   11211 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8] misses:0}
	I1025 20:53:22.540308   11211 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.540323   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 20:53:22.540400   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	W1025 20:53:22.612915   11211 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:22.612977   11211 network_create.go:107] failed to create docker network kubernetes-upgrade-205321 192.168.58.0/24, will retry: subnet is taken
	I1025 20:53:22.613285   11211 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8] misses:1}
	I1025 20:53:22.613302   11211 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.613524   11211 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] misses:1}
	I1025 20:53:22.613540   11211 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:22.613555   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 20:53:22.613655   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	I1025 20:53:22.798653   11211 network_create.go:99] docker network kubernetes-upgrade-205321 192.168.67.0/24 created
	I1025 20:53:22.798687   11211 kic.go:106] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-205321" container
	I1025 20:53:22.798783   11211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 20:53:22.864367   11211 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-205321 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --label created_by.minikube.sigs.k8s.io=true
	I1025 20:53:23.237140   11211 oci.go:103] Successfully created a docker volume kubernetes-upgrade-205321
	I1025 20:53:23.237280   11211 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 20:53:23.457492   11211 cli_runner.go:211] docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 20:53:23.457541   11211 client.go:171] LocalClient.Create took 1.202093531s
	I1025 20:53:25.459928   11211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:53:25.460062   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:25.521157   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:25.521254   11211 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:25.799803   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:25.864051   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:25.864142   11211 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:26.406653   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:26.471293   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:26.471377   11211 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.128886   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:27.193035   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:27.193119   11211 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:27.193142   11211 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.193192   11211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:53:27.193252   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:27.254530   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:27.254605   11211 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.486737   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:27.551906   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:27.551991   11211 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:27.997846   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:28.086959   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:28.087040   11211 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:28.406840   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:28.494195   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:28.494279   11211 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:29.048607   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:29.109496   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:29.109581   11211 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:29.109595   11211 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:29.109607   11211 start.go:128] duration metric: createHost completed in 6.896656149s
	I1025 20:53:29.109615   11211 start.go:83] releasing machines lock for "kubernetes-upgrade-205321", held for 6.896758025s
	W1025 20:53:29.109629   11211 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 20:53:29.110015   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:29.171812   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:29.171853   11211 delete.go:82] Unable to get host status for kubernetes-upgrade-205321, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	W1025 20:53:29.171975   11211 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 20:53:29.171984   11211 start.go:618] Will try again in 5 seconds ...
	I1025 20:53:34.172114   11211 start.go:364] acquiring machines lock for kubernetes-upgrade-205321: {Name:mk08e6c3268915fe3a30b8582f01f341447ad995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 20:53:34.172220   11211 start.go:368] acquired machines lock for "kubernetes-upgrade-205321" in 84.348µs
	I1025 20:53:34.172238   11211 start.go:96] Skipping create...Using existing machine configuration
	I1025 20:53:34.172246   11211 fix.go:55] fixHost starting: 
	I1025 20:53:34.172442   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.237102   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:34.237153   11211 fix.go:103] recreateIfNeeded on kubernetes-upgrade-205321: state= err=unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.237191   11211 fix.go:108] machineExists: false. err=machine does not exist
	I1025 20:53:34.268859   11211 out.go:177] * docker "kubernetes-upgrade-205321" container is missing, will recreate.
	I1025 20:53:34.310928   11211 delete.go:124] DEMOLISHING kubernetes-upgrade-205321 ...
	I1025 20:53:34.311098   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.372164   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	W1025 20:53:34.372214   11211 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.372238   11211 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.372593   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.433566   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:34.433627   11211 delete.go:82] Unable to get host status for kubernetes-upgrade-205321, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.433715   11211 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-205321
	W1025 20:53:34.494504   11211 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:34.494535   11211 kic.go:356] could not find the container kubernetes-upgrade-205321 to remove it. will try anyways
	I1025 20:53:34.494608   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:34.555572   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	W1025 20:53:34.555615   11211 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:34.555699   11211 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-205321 /bin/bash -c "sudo init 0"
	W1025 20:53:34.618081   11211 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-205321 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 20:53:34.618125   11211 oci.go:646] error shutdown kubernetes-upgrade-205321: docker exec --privileged -t kubernetes-upgrade-205321 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:35.618397   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:35.682205   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:35.682269   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:35.682281   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:35.682303   11211 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.083616   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:36.147998   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:36.148054   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.148064   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:36.148083   11211 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.911840   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:36.973493   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:36.973542   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:36.973551   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:36.973571   11211 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:38.451982   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:38.518066   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:38.518119   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:38.518131   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:38.518151   11211 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:39.723890   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:39.785406   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:39.785457   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:39.785467   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:39.785489   11211 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:42.014944   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:42.077318   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:42.077376   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:42.077393   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:42.077414   11211 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:45.184314   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:45.248279   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:45.248329   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:45.248339   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:45.248358   11211 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:50.768809   11211 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}
	W1025 20:53:50.836208   11211 cli_runner.go:211] docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}} returned with exit code 1
	I1025 20:53:50.836255   11211 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:50.836267   11211 oci.go:660] temporary error: container kubernetes-upgrade-205321 status is  but expect it to be exited
	I1025 20:53:50.836294   11211 oci.go:88] couldn't shut down kubernetes-upgrade-205321 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-205321": docker container inspect kubernetes-upgrade-205321 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	 
	I1025 20:53:50.836364   11211 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-205321
	I1025 20:53:50.898984   11211 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-205321
	W1025 20:53:50.958326   11211 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:50.958444   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:53:51.018871   11211 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-205321
	W1025 20:53:51.131889   11211 delete.go:139] delete failed (probably ok) <nil>
	I1025 20:53:51.131907   11211 fix.go:115] Sleeping 1 second for extra luck!
	I1025 20:53:52.134019   11211 start.go:125] createHost starting for "" (driver="docker")
	I1025 20:53:52.156610   11211 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 20:53:52.156793   11211 start.go:159] libmachine.API.Create for "kubernetes-upgrade-205321" (driver="docker")
	I1025 20:53:52.156837   11211 client.go:168] LocalClient.Create starting
	I1025 20:53:52.157075   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 20:53:52.157161   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:52.157186   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:52.157268   11211 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 20:53:52.157336   11211 main.go:134] libmachine: Decoding PEM data...
	I1025 20:53:52.157354   11211 main.go:134] libmachine: Parsing certificate...
	I1025 20:53:52.178835   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 20:53:52.244431   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 20:53:52.244510   11211 network_create.go:272] running [docker network inspect kubernetes-upgrade-205321] to gather additional debugging logs...
	I1025 20:53:52.244535   11211 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205321
	W1025 20:53:52.305441   11211 cli_runner.go:211] docker network inspect kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:52.305461   11211 network_create.go:275] error running [docker network inspect kubernetes-upgrade-205321]: docker network inspect kubernetes-upgrade-205321: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-205321
	I1025 20:53:52.305477   11211 network_create.go:277] output of [docker network inspect kubernetes-upgrade-205321]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-205321
	
	** /stderr **
	I1025 20:53:52.305564   11211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 20:53:52.367141   11211 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] misses:1}
	I1025 20:53:52.367170   11211 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367400   11211 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] misses:2}
	I1025 20:53:52.367412   11211 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367621   11211 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] amended:false}} dirty:map[] misses:0}
	I1025 20:53:52.367630   11211 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367834   11211 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8] amended:true}} dirty:map[192.168.49.0:0xc000122330 192.168.58.0:0xc00052c6f8 192.168.67.0:0xc0008522e8 192.168.76.0:0xc000b22588] misses:0}
	I1025 20:53:52.367847   11211 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 20:53:52.367854   11211 network_create.go:115] attempt to create docker network kubernetes-upgrade-205321 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 20:53:52.367924   11211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 kubernetes-upgrade-205321
	I1025 20:53:52.465378   11211 network_create.go:99] docker network kubernetes-upgrade-205321 192.168.76.0/24 created
	I1025 20:53:52.465408   11211 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-205321" container
	I1025 20:53:52.465517   11211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 20:53:52.533696   11211 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-205321 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --label created_by.minikube.sigs.k8s.io=true
	I1025 20:53:52.595267   11211 oci.go:103] Successfully created a docker volume kubernetes-upgrade-205321
	I1025 20:53:52.595382   11211 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 20:53:52.723694   11211 cli_runner.go:211] docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 20:53:52.723754   11211 client.go:171] LocalClient.Create took 566.908286ms
	I1025 20:53:54.724802   11211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:53:54.724888   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:54.785234   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:54.785337   11211 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:54.985954   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:55.050904   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:55.051014   11211 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:55.493487   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:55.556474   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:55.556566   11211 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:55.963219   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:56.024837   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:56.024923   11211 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:56.620624   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:56.683026   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:56.683120   11211 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:56.683145   11211 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:56.683189   11211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:53:56.683271   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:56.742237   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:56.742317   11211 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:57.010312   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:57.072591   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:57.072685   11211 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:57.583861   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:57.646440   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:57.646544   11211 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.094933   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.157399   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:58.157493   11211 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:58.157519   11211 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.157534   11211 start.go:128] duration metric: createHost completed in 6.023473332s
	I1025 20:53:58.157594   11211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:53:58.157641   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.217195   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:58.217285   11211 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.532819   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.598817   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:58.598899   11211 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:58.866274   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:58.930595   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:58.930681   11211 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:59.701034   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:59.764130   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:53:59.764223   11211 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:53:59.764249   11211 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:53:59.764298   11211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 20:53:59.764365   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:53:59.823618   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:53:59.823696   11211 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.079903   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:54:00.142257   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:54:00.142348   11211 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.340829   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:54:00.404964   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	I1025 20:54:00.405050   11211 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.775714   11211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321
	W1025 20:54:00.838448   11211 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321 returned with exit code 1
	W1025 20:54:00.838542   11211 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	
	W1025 20:54:00.838570   11211 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-205321": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205321: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-205321
	I1025 20:54:00.838579   11211 fix.go:57] fixHost completed within 26.666317325s
	I1025 20:54:00.838586   11211 start.go:83] releasing machines lock for "kubernetes-upgrade-205321", held for 26.666342547s
	W1025 20:54:00.838725   11211 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-205321" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 20:54:00.882171   11211 out.go:177] 
	W1025 20:54:00.903423   11211 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-205321 container: docker run --rm --name kubernetes-upgrade-205321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-205321 --entrypoint /usr/bin/test -v kubernetes-upgrade-205321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 20:54:00.903455   11211 out.go:239] * 
	W1025 20:54:00.904586   11211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 20:54:00.988172   11211 out.go:177] 
	
	* 
	* The control plane node "m01" does not exist.
	  To start a cluster, run: "minikube start -p stopped-upgrade-205416"

-- /stdout --
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.48s)
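Every failure in this block collapses to the same stderr line: the Docker Desktop containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections, so each docker run/inspect exited non-zero and the retry.go backoff loop eventually gave up. Below is a minimal Go sketch of that retry-with-roughly-doubling-delay pattern, assuming it mirrors the "will retry after ..." intervals logged above; the name retryWithBackoff and the docker info probe are hypothetical illustrations, not minikube's actual API.

// retrysketch.go - hypothetical reconstruction of the retry behaviour
// visible in the retry.go:31 lines above; illustrative only, not
// minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryWithBackoff (hypothetical name) re-runs fn until it succeeds or the
// attempt budget is exhausted, roughly doubling the delay between attempts,
// matching the logged sequence (~400ms, ~760ms, ~1.5s, ~3.1s, ~5.5s).
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("out of retries: %w", err)
}

func main() {
	// All the failed checks above reduce to "is the Docker daemon
	// reachable?", so a plain `docker info` probe fails the same way
	// until Docker Desktop recovers.
	err := retryWithBackoff(5, 400*time.Millisecond, func() error {
		return exec.Command("docker", "info").Run()
	})
	if err != nil {
		fmt.Println("Docker daemon still unreachable:", err)
	}
}

With the daemon down, every attempt fails identically, which is why the report repeats the same "Error: No such container" and "connection refused" lines regardless of backoff.
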
TestPause/serial/Start (39.39s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-212029 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-212029 --memory=2048 --install-addons=false --wait=all --driver=docker : exit status 80 (39.210443985s)

                                                
                                                
-- stdout --
	* [pause-212029] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node pause-212029 in cluster pause-212029
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-212029" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for pause-212029 container: docker run --rm --name pause-212029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-212029 --entrypoint /usr/bin/test -v pause-212029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p pause-212029" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for pause-212029 container: docker run --rm --name pause-212029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-212029 --entrypoint /usr/bin/test -v pause-212029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for pause-212029 container: docker run --rm --name pause-212029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-212029 --entrypoint /usr/bin/test -v pause-212029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-212029 --memory=2048 --install-addons=false --wait=all --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-212029
helpers_test.go:235: (dbg) docker inspect pause-212029:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-212029",
	        "Id": "999ba25bd905af1382ad8445a93387cbc89b18b6822f9aaca6f8dbdebd2f0bd9",
	        "Created": "2022-10-26T04:21:00.313655026Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-212029"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-212029 -n pause-212029
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-212029 -n pause-212029: exit status 7 (113.10276ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:21:09.081602   14925 status.go:249] status error: host: state: unknown state "pause-212029": docker container inspect pause-212029 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-212029

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-212029" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (39.39s)
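Note: the post-mortem's "Nonexistent" status follows directly from the stderr above: the state probe at status.go:249 is `docker container inspect pause-212029 --format={{.State.Status}}`, which exits 1 because the container was never created. A sketch of that probe in Go; containerState is a hypothetical wrapper, and only the docker command itself comes from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the probe shown in the status.go:249 log line.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("pause-212029")
	if err != nil {
		fmt.Println(err) // "No such container" maps to state "Nonexistent"
		return
	}
	fmt.Println(state)
}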

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-212109 --driver=docker 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-212109 --driver=docker : exit status 80 (39.221762023s)

                                                
                                                
-- stdout --
	* [NoKubernetes-212109] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node NoKubernetes-212109 in cluster NoKubernetes-212109
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-212109" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-212109 --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-212109
helpers_test.go:235: (dbg) docker inspect NoKubernetes-212109:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-212109",
	        "Id": "e10abbf2b92b34d3a03fa55e4b4b6611e8458f06e3042a256843003af7afa1ca",
	        "Created": "2022-10-26T04:21:40.891774428Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-212109"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109: exit status 7 (112.442661ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:21:49.665816   15160 status.go:249] status error: host: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-212109" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (39.40s)
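Note: the `docker inspect NoKubernetes-212109` JSON in the post-mortem above is not a container object: the Scope/IPAM/Containers fields (with "Containers": {}) identify it as the minikube-created network of the same name. Plain `docker inspect` matches any object type by name, so the helper finds the network even though the container was never created. A short Go sketch that disambiguates by type; the `--type` flag is standard Docker CLI, and the format string is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "NoKubernetes-212109"
	// --type restricts matching; on this agent only the network exists.
	for _, t := range []string{"container", "network"} {
		out, err := exec.Command("docker", "inspect", "--type", t,
			"--format", "{{.Name}}", name).CombinedOutput()
		fmt.Printf("%-9s -> %s err=%v\n", t, strings.TrimSpace(string(out)), err)
	}
}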

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (61.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --driver=docker 
E1025 21:22:04.149096    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --driver=docker : exit status 80 (1m1.620617293s)

                                                
                                                
-- stdout --
	* [NoKubernetes-212109] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-212109
	* Pulling base image ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-212109" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-212109
helpers_test.go:235: (dbg) docker inspect NoKubernetes-212109:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-212109",
	        "Id": "96d8f4cd771607724f2470aafc6c02b35ff663abeb66e53bdf4f33e379249541",
	        "Created": "2022-10-26T04:22:42.343436832Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-212109"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109: exit status 7 (112.367743ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:22:51.466634   15449 status.go:249] status error: host: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-212109" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (61.80s)

                                                
                                    
TestNoKubernetes/serial/Start (61.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --driver=docker : exit status 80 (1m1.620332719s)

                                                
                                                
-- stdout --
	* [NoKubernetes-212109] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-212109
	* Pulling base image ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-212109" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-212109
helpers_test.go:235: (dbg) docker inspect NoKubernetes-212109:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-212109",
	        "Id": "df6b0d2e6cb8e29b53fba1c35a836016ec0a0a83c4da0d5906d9c5e1f67e8405",
	        "Created": "2022-10-26T04:23:44.090644808Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-212109"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109: exit status 7 (122.947297ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:23:53.277307   15778 status.go:249] status error: host: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-212109" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (61.81s)
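Note: in each of these post-mortems the helper treats `minikube status` exit code 7 as tolerable ("may be ok"): with `--format={{.Host}}` it prints "Nonexistent" when the host was never created, which is exactly the situation after the failed provisioning above. A sketch of that probe; the binary path and profile name are taken from the log, and the error handling is a plausible reconstruction, not the helper's actual code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "NoKubernetes-212109", "-n", "NoKubernetes-212109")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Mirrors helpers_test.go: "status error: exit status 7 (may be ok)"
		fmt.Printf("host %q not running (state=%q), skipping log retrieval\n",
			"NoKubernetes-212109", host)
		return
	}
	fmt.Println(host, err)
}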

                                                
                                    
TestNoKubernetes/serial/Stop (14.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-212109
no_kubernetes_test.go:158: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p NoKubernetes-212109: exit status 82 (14.658488697s)

                                                
                                                
-- stdout --
	* Stopping node "NoKubernetes-212109"  ...
	* Stopping node "NoKubernetes-212109"  ...
	* Stopping node "NoKubernetes-212109"  ...
	* Stopping node "NoKubernetes-212109"  ...
	* Stopping node "NoKubernetes-212109"  ...
	* Stopping node "NoKubernetes-212109"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect NoKubernetes-212109 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:160: Failed to stop minikube "out/minikube-darwin-amd64 stop -p NoKubernetes-212109" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-212109
helpers_test.go:235: (dbg) docker inspect NoKubernetes-212109:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-212109",
	        "Id": "df6b0d2e6cb8e29b53fba1c35a836016ec0a0a83c4da0d5906d9c5e1f67e8405",
	        "Created": "2022-10-26T04:23:44.090644808Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-212109"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109: exit status 7 (113.475057ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:24:15.585612   15866 status.go:249] status error: host: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-212109" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Stop (14.84s)
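Note: exit 82 (GUEST_STOP_TIMEOUT) is downstream of the same provisioning failure: each of the six "Stopping node" attempts re-inspects a container that was never created, fails with "No such container", and the stop eventually gives up. A generic bounded-retry sketch consistent with the output shown; this is illustrative only and is not minikube's actual stop implementation.

package main

import (
	"fmt"
	"time"
)

// stopOnce stands in for one stop attempt; here it always fails the way
// the report does, because the container does not exist.
func stopOnce(name string) error {
	return fmt.Errorf("Error: No such container: %s", name)
}

func main() {
	const name = "NoKubernetes-212109"
	var err error
	for i := 0; i < 6; i++ { // six attempts, matching the stdout above
		fmt.Printf("* Stopping node %q  ...\n", name)
		if err = stopOnce(name); err == nil {
			return
		}
		time.Sleep(2 * time.Second) // hypothetical backoff between attempts
	}
	fmt.Println("giving up after 6 attempts:", err)
}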

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (61.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-212109 --driver=docker 
E1025 21:25:12.457934    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:25:12.994708    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-212109 --driver=docker : exit status 80 (1m0.978882734s)

                                                
                                                
-- stdout --
	* [NoKubernetes-212109] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-212109
	* Pulling base image ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-212109" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-212109" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-212109 container: docker run --rm --name NoKubernetes-212109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-212109 --entrypoint /usr/bin/test -v NoKubernetes-212109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-212109 --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-212109
helpers_test.go:235: (dbg) docker inspect NoKubernetes-212109:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-212109",
	        "Id": "282800e36d7a74e563bfc71ab26dad7daf63b42905b15b5d285c252bb4b31c58",
	        "Created": "2022-10-26T04:25:08.257958531Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-212109"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-212109 -n NoKubernetes-212109: exit status 7 (112.95138ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:25:16.746718   16153 status.go:249] status error: host: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-212109" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (61.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (39.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : exit status 80 (39.126532571s)

                                                
                                                
-- stdout --
	* [auto-205230] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node auto-205230 in cluster auto-205230
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-205230" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:28:27.358403   17360 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:28:27.358578   17360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:28:27.358583   17360 out.go:309] Setting ErrFile to fd 2...
	I1025 21:28:27.358587   17360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:28:27.358702   17360 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:28:27.359175   17360 out.go:303] Setting JSON to false
	I1025 21:28:27.373909   17360 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5276,"bootTime":1666753231,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:28:27.374009   17360 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:28:27.396085   17360 out.go:177] * [auto-205230] minikube v1.27.1 on Darwin 12.6
	I1025 21:28:27.417835   17360 notify.go:220] Checking for updates...
	I1025 21:28:27.440061   17360 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:28:27.462080   17360 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:28:27.483752   17360 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:28:27.531885   17360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:28:27.574766   17360 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:28:27.597634   17360 config.go:180] Loaded profile config "cert-expiration-212703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:28:27.597804   17360 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:28:27.597881   17360 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:28:27.664570   17360 docker.go:137] docker version: linux-20.10.17
	I1025 21:28:27.664682   17360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:28:27.793203   17360 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:28:27.735798119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:28:27.817589   17360 out.go:177] * Using the docker driver based on user configuration
	I1025 21:28:27.860344   17360 start.go:282] selected driver: docker
	I1025 21:28:27.860380   17360 start.go:808] validating driver "docker" against <nil>
	I1025 21:28:27.860440   17360 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:28:27.863790   17360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:28:27.991715   17360 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:28:27.93547003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:28:27.991837   17360 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:28:27.991981   17360 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:28:28.013650   17360 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:28:28.035519   17360 cni.go:95] Creating CNI manager for ""
	I1025 21:28:28.035563   17360 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:28:28.035578   17360 start_flags.go:317] config:
	{Name:auto-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:auto-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:28:28.057314   17360 out.go:177] * Starting control plane node auto-205230 in cluster auto-205230
	I1025 21:28:28.078329   17360 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:28:28.099434   17360 out.go:177] * Pulling base image ...
	I1025 21:28:28.120325   17360 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:28:28.120338   17360 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:28:28.120396   17360 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:28:28.120414   17360 cache.go:57] Caching tarball of preloaded images
	I1025 21:28:28.120604   17360 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:28:28.120628   17360 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:28:28.121553   17360 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/auto-205230/config.json ...
	I1025 21:28:28.121663   17360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/auto-205230/config.json: {Name:mk1fdb974b15106fb78cb2848aadebe19bd1dd12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
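	(The profile saved above is plain JSON on disk, so the effective cluster config can be checked directly after a run like this one. A minimal sketch, assuming the Jenkins workspace path shown in the log still exists and that python3 is available for pretty-printing:

	$ python3 -m json.tool /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/auto-205230/config.json
	)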
	I1025 21:28:28.183271   17360 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:28:28.183291   17360 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:28:28.183308   17360 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:28:28.183348   17360 start.go:364] acquiring machines lock for auto-205230: {Name:mk04062eefe1f5ebdc0219dae0f92bb96f78dc47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:28:28.183501   17360 start.go:368] acquired machines lock for "auto-205230" in 140.932µs
	I1025 21:28:28.183526   17360 start.go:93] Provisioning new machine with config: &{Name:auto-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:auto-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:28:28.183618   17360 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:28:28.205194   17360 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:28:28.205556   17360 start.go:159] libmachine.API.Create for "auto-205230" (driver="docker")
	I1025 21:28:28.205600   17360 client.go:168] LocalClient.Create starting
	I1025 21:28:28.205721   17360 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:28:28.205784   17360 main.go:134] libmachine: Decoding PEM data...
	I1025 21:28:28.205807   17360 main.go:134] libmachine: Parsing certificate...
	I1025 21:28:28.205909   17360 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:28:28.205974   17360 main.go:134] libmachine: Decoding PEM data...
	I1025 21:28:28.205995   17360 main.go:134] libmachine: Parsing certificate...
	I1025 21:28:28.227011   17360 cli_runner.go:164] Run: docker network inspect auto-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:28:28.289067   17360 cli_runner.go:211] docker network inspect auto-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:28:28.289159   17360 network_create.go:272] running [docker network inspect auto-205230] to gather additional debugging logs...
	I1025 21:28:28.289173   17360 cli_runner.go:164] Run: docker network inspect auto-205230
	W1025 21:28:28.349558   17360 cli_runner.go:211] docker network inspect auto-205230 returned with exit code 1
	I1025 21:28:28.349581   17360 network_create.go:275] error running [docker network inspect auto-205230]: docker network inspect auto-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-205230
	I1025 21:28:28.349597   17360 network_create.go:277] output of [docker network inspect auto-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-205230
	
	** /stderr **
	I1025 21:28:28.349685   17360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:28:28.410619   17360 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000bb2320] misses:0}
	I1025 21:28:28.410653   17360 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:28.410668   17360 network_create.go:115] attempt to create docker network auto-205230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:28:28.410730   17360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-205230 auto-205230
	W1025 21:28:28.471007   17360 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-205230 auto-205230 returned with exit code 1
	W1025 21:28:28.471060   17360 network_create.go:107] failed to create docker network auto-205230 192.168.49.0/24, will retry: subnet is taken
	I1025 21:28:28.471334   17360 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320] amended:false}} dirty:map[] misses:0}
	I1025 21:28:28.471349   17360 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:28.471547   17360 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320] amended:true}} dirty:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8] misses:0}
	I1025 21:28:28.471559   17360 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:28.471597   17360 network_create.go:115] attempt to create docker network auto-205230 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:28:28.471661   17360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-205230 auto-205230
	W1025 21:28:28.533416   17360 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-205230 auto-205230 returned with exit code 1
	W1025 21:28:28.533445   17360 network_create.go:107] failed to create docker network auto-205230 192.168.58.0/24, will retry: subnet is taken
	I1025 21:28:28.533723   17360 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320] amended:true}} dirty:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8] misses:1}
	I1025 21:28:28.533738   17360 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:28.533936   17360 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320] amended:true}} dirty:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8 192.168.67.0:0xc00069a440] misses:1}
	I1025 21:28:28.533946   17360 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:28.533953   17360 network_create.go:115] attempt to create docker network auto-205230 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:28:28.534012   17360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-205230 auto-205230
	I1025 21:28:28.624902   17360 network_create.go:99] docker network auto-205230 192.168.67.0/24 created
	I1025 21:28:28.624939   17360 kic.go:106] calculated static IP "192.168.67.2" for the "auto-205230" container
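	(The two "subnet is taken" failures above mean 192.168.49.0/24 and 192.168.58.0/24 were already held by other Docker networks, most likely ones left behind by earlier profiles on this builder. One way to confirm, using the created_by.minikube.sigs.k8s.io label that the create commands in this log apply themselves:

	$ docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	$ docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' \
	    $(docker network ls -q --filter label=created_by.minikube.sigs.k8s.io=true)
	)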
	I1025 21:28:28.625049   17360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:28:28.687442   17360 cli_runner.go:164] Run: docker volume create auto-205230 --label name.minikube.sigs.k8s.io=auto-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:28:28.749011   17360 oci.go:103] Successfully created a docker volume auto-205230
	I1025 21:28:28.749143   17360 cli_runner.go:164] Run: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:28:28.964564   17360 cli_runner.go:211] docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:28:28.964655   17360 client.go:171] LocalClient.Create took 759.042162ms
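	(Exit code 125 from docker run means the daemon rejected the command before any container was created, which is why every inspect call that follows fails with "No such container" rather than finding a stopped container. A quick way to separate a CLI problem from a daemon problem, sketched with stock docker commands:

	$ docker version --format 'client={{.Client.Version}} server={{.Server.Version}}'
	$ docker info > /dev/null && echo "daemon reachable"
	)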
	I1025 21:28:30.967046   17360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:28:30.967166   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:31.030631   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:31.030725   17360 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:31.309330   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:31.373943   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:31.374016   17360 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:31.916664   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:31.983122   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:31.983193   17360 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:32.640731   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:32.703977   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	W1025 21:28:32.704060   17360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	
	W1025 21:28:32.704078   17360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:32.704125   17360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:28:32.704170   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:32.764561   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:32.764643   17360 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:32.998079   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:33.064955   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:33.065048   17360 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:33.512170   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:33.575859   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:33.575934   17360 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:33.895819   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:33.961859   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:28:33.961950   17360 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:34.518292   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:28:34.580310   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	W1025 21:28:34.580393   17360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	
	W1025 21:28:34.580407   17360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:34.580429   17360 start.go:128] duration metric: createHost completed in 6.396792047s
	I1025 21:28:34.580438   17360 start.go:83] releasing machines lock for "auto-205230", held for 6.396915124s
	W1025 21:28:34.580452   17360 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for auto-205230 container: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
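	(The stderr above is the real failure for this run: the Docker Desktop VM's containerd socket, /var/run/desktop-containerd/containerd.sock, is refusing connections, so the daemon cannot start any container at all. On macOS this usually means Docker Desktop itself is wedged; one common recovery sequence, assuming the stock Docker Desktop app:

	$ osascript -e 'quit app "Docker"'
	$ open -a Docker
	$ until docker info > /dev/null 2>&1; do sleep 1; done   # wait for the daemon to come back
	)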
	I1025 21:28:34.580864   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:34.641248   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:34.641298   17360 delete.go:82] Unable to get host status for auto-205230, assuming it has already been deleted: state: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	W1025 21:28:34.641451   17360 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for auto-205230 container: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:28:34.641460   17360 start.go:618] Will try again in 5 seconds ...
	I1025 21:28:39.643615   17360 start.go:364] acquiring machines lock for auto-205230: {Name:mk04062eefe1f5ebdc0219dae0f92bb96f78dc47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:28:39.643875   17360 start.go:368] acquired machines lock for "auto-205230" in 110.745µs
	I1025 21:28:39.643903   17360 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:28:39.643919   17360 fix.go:55] fixHost starting: 
	I1025 21:28:39.644288   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:39.708045   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:39.708093   17360 fix.go:103] recreateIfNeeded on auto-205230: state= err=unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:39.708108   17360 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:28:39.730116   17360 out.go:177] * docker "auto-205230" container is missing, will recreate.
	I1025 21:28:39.751742   17360 delete.go:124] DEMOLISHING auto-205230 ...
	I1025 21:28:39.751981   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:39.813825   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	W1025 21:28:39.813865   17360 stop.go:75] unable to get state: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:39.813879   17360 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:39.814235   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:39.873960   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:39.874011   17360 delete.go:82] Unable to get host status for auto-205230, assuming it has already been deleted: state: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:39.874092   17360 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-205230
	W1025 21:28:39.935092   17360 cli_runner.go:211] docker container inspect -f {{.Id}} auto-205230 returned with exit code 1
	I1025 21:28:39.935119   17360 kic.go:356] could not find the container auto-205230 to remove it. will try anyways
	I1025 21:28:39.935191   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:39.995282   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	W1025 21:28:39.995327   17360 oci.go:84] error getting container status, will try to delete anyways: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:39.995400   17360 cli_runner.go:164] Run: docker exec --privileged -t auto-205230 /bin/bash -c "sudo init 0"
	W1025 21:28:40.055393   17360 cli_runner.go:211] docker exec --privileged -t auto-205230 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:28:40.055423   17360 oci.go:646] error shutdown auto-205230: docker exec --privileged -t auto-205230 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:41.057816   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:41.120650   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:41.120698   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:41.120707   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:41.120727   17360 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:41.523506   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:41.586549   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:41.586591   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:41.586599   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:41.586618   17360 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:42.350439   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:42.415491   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:42.415531   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:42.415540   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:42.415560   17360 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:43.894434   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:43.959916   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:43.959955   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:43.959962   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:43.959982   17360 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:45.167662   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:45.232383   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:45.232439   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:45.232448   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:45.232467   17360 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:47.463991   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:47.529599   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:47.529672   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:47.529680   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:47.529707   17360 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:50.636600   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:50.700995   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:50.701034   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:50.701042   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:50.701061   17360 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:56.219778   17360 cli_runner.go:164] Run: docker container inspect auto-205230 --format={{.State.Status}}
	W1025 21:28:56.284460   17360 cli_runner.go:211] docker container inspect auto-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:28:56.284505   17360 oci.go:658] temporary error verifying shutdown: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:28:56.284515   17360 oci.go:660] temporary error: container auto-205230 status is  but expect it to be exited
	I1025 21:28:56.284540   17360 oci.go:88] couldn't shut down auto-205230 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-205230": docker container inspect auto-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	 
	I1025 21:28:56.284615   17360 cli_runner.go:164] Run: docker rm -f -v auto-205230
	I1025 21:28:56.347691   17360 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-205230
	W1025 21:28:56.408258   17360 cli_runner.go:211] docker container inspect -f {{.Id}} auto-205230 returned with exit code 1
	I1025 21:28:56.408365   17360 cli_runner.go:164] Run: docker network inspect auto-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:28:56.471193   17360 cli_runner.go:164] Run: docker network rm auto-205230
	W1025 21:28:56.581885   17360 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:28:56.581903   17360 fix.go:115] Sleeping 1 second for extra luck!
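	(The demolish phase above tears down everything tied to the profile by name before the recreate attempt. If the same cleanup has to be done by hand, the equivalent commands would be as follows; the first two are taken verbatim from this log, and the volume rm is an assumption based on the named volume created earlier, since docker rm -v only removes anonymous volumes:

	$ docker rm -f -v auto-205230
	$ docker network rm auto-205230
	$ docker volume rm auto-205230
	)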
	I1025 21:28:57.584166   17360 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:28:57.606295   17360 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:28:57.606454   17360 start.go:159] libmachine.API.Create for "auto-205230" (driver="docker")
	I1025 21:28:57.606482   17360 client.go:168] LocalClient.Create starting
	I1025 21:28:57.606671   17360 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:28:57.606751   17360 main.go:134] libmachine: Decoding PEM data...
	I1025 21:28:57.606772   17360 main.go:134] libmachine: Parsing certificate...
	I1025 21:28:57.606847   17360 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:28:57.606895   17360 main.go:134] libmachine: Decoding PEM data...
	I1025 21:28:57.606916   17360 main.go:134] libmachine: Parsing certificate...
	I1025 21:28:57.607572   17360 cli_runner.go:164] Run: docker network inspect auto-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:28:57.671163   17360 cli_runner.go:211] docker network inspect auto-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:28:57.671237   17360 network_create.go:272] running [docker network inspect auto-205230] to gather additional debugging logs...
	I1025 21:28:57.671255   17360 cli_runner.go:164] Run: docker network inspect auto-205230
	W1025 21:28:57.731152   17360 cli_runner.go:211] docker network inspect auto-205230 returned with exit code 1
	I1025 21:28:57.731176   17360 network_create.go:275] error running [docker network inspect auto-205230]: docker network inspect auto-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-205230
	I1025 21:28:57.731212   17360 network_create.go:277] output of [docker network inspect auto-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-205230
	
	** /stderr **
	I1025 21:28:57.731282   17360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:28:57.793030   17360 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320] amended:true}} dirty:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8 192.168.67.0:0xc00069a440] misses:1}
	I1025 21:28:57.793058   17360 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:57.793321   17360 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320] amended:true}} dirty:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8 192.168.67.0:0xc00069a440] misses:2}
	I1025 21:28:57.793330   17360 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:57.793527   17360 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8 192.168.67.0:0xc00069a440] amended:false}} dirty:map[] misses:0}
	I1025 21:28:57.793536   17360 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:57.793723   17360 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8 192.168.67.0:0xc00069a440] amended:true}} dirty:map[192.168.49.0:0xc000bb2320 192.168.58.0:0xc00069a3f8 192.168.67.0:0xc00069a440 192.168.76.0:0xc000bb22f8] misses:0}
	I1025 21:28:57.793737   17360 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:28:57.793751   17360 network_create.go:115] attempt to create docker network auto-205230 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:28:57.793817   17360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-205230 auto-205230
	I1025 21:28:57.883362   17360 network_create.go:99] docker network auto-205230 192.168.76.0/24 created
	I1025 21:28:57.883390   17360 kic.go:106] calculated static IP "192.168.76.2" for the "auto-205230" container
	I1025 21:28:57.883502   17360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:28:57.944866   17360 cli_runner.go:164] Run: docker volume create auto-205230 --label name.minikube.sigs.k8s.io=auto-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:28:58.005674   17360 oci.go:103] Successfully created a docker volume auto-205230
	I1025 21:28:58.005784   17360 cli_runner.go:164] Run: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:28:58.140075   17360 cli_runner.go:211] docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:28:58.140114   17360 client.go:171] LocalClient.Create took 533.624904ms
	I1025 21:29:00.142515   17360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:00.142616   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:00.204551   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:00.204635   17360 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:00.405261   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:00.471231   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:00.471311   17360 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:00.915869   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:00.980301   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:00.980399   17360 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:01.386693   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:01.453453   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:01.453533   17360 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:02.047353   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:02.111109   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	W1025 21:29:02.111218   17360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	
	W1025 21:29:02.111256   17360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:02.111306   17360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:02.111348   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:02.172715   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:02.172820   17360 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:02.442877   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:02.505553   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:02.505631   17360 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:03.017245   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:03.081403   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:03.081520   17360 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:03.530022   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:03.593243   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	W1025 21:29:03.593327   17360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	
	W1025 21:29:03.593352   17360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:03.593366   17360 start.go:128] duration metric: createHost completed in 6.009161083s
	I1025 21:29:03.593429   17360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:03.593490   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:03.654005   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:03.654080   17360 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:03.969639   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:04.036609   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:04.036698   17360 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:04.304068   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:04.368859   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:04.368950   17360 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:05.139379   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:05.202820   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	W1025 21:29:05.202911   17360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	
	W1025 21:29:05.202943   17360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:05.203009   17360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:05.203068   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:05.263384   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:05.263471   17360 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:05.520238   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:05.582653   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:05.582739   17360 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:05.782348   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:05.847637   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	I1025 21:29:05.847753   17360 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:06.218741   17360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230
	W1025 21:29:06.282504   17360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230 returned with exit code 1
	W1025 21:29:06.282588   17360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	
	W1025 21:29:06.282606   17360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-205230
	I1025 21:29:06.282633   17360 fix.go:57] fixHost completed within 26.638650526s
	I1025 21:29:06.282642   17360 start.go:83] releasing machines lock for "auto-205230", held for 26.638691598s
	W1025 21:29:06.282846   17360 out.go:239] * Failed to start docker container. Running "minikube delete -p auto-205230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for auto-205230 container: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p auto-205230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for auto-205230 container: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:29:06.326245   17360 out.go:177] 
	W1025 21:29:06.347415   17360 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for auto-205230 container: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for auto-205230 container: docker run --rm --name auto-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-205230 --entrypoint /usr/bin/test -v auto-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:29:06.347446   17360 out.go:239] * 
	* 
	W1025 21:29:06.348612   17360 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:29:06.411187   17360 out.go:177] 
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (39.14s)
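The "retry.go:31] will retry after ..." lines above show the pattern behind this failure mode: every `docker container inspect` probe for the node's SSH port is wrapped in a bounded retry with a short, varying delay. A minimal Go sketch of that retry shape, for orientation only; the function names, attempt count, and delay range are illustrative assumptions for this report, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// probeSSHPort is an illustrative stand-in for the probe retried above:
// ask Docker for the host port mapped to 22/tcp on the node container.
func probeSSHPort(container string) error {
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
	if err != nil {
		return fmt.Errorf("get port 22 for %q: %w\n%s", container, err, out)
	}
	return nil
}

// retryWithBackoff mirrors the "will retry after ..." lines: a bounded
// number of attempts with a randomized pause between them.
func retryWithBackoff(attempts int, fn func() error) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		// Roughly 200-800ms, randomized so parallel tests do not probe in lockstep
		// (an assumption; the log shows delays of this order).
		delay := 200*time.Millisecond + time.Duration(rand.Int63n(int64(600*time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if err := retryWithBackoff(10, func() error { return probeSSHPort("auto-205230") }); err != nil {
		fmt.Println(err)
	}
}

Note that no amount of retrying can succeed here: the `docker run` for the preload sidecar exits 125 because Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refuses connections, so the container the inspect loop is waiting for is never created. Restarting Docker Desktop on the agent, rather than the suggested "minikube delete -p auto-205230", is the more likely fix.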

TestNetworkPlugins/group/kindnet/Start (39.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 80 (39.265666256s)

-- stdout --
	* [kindnet-205231] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kindnet-205231 in cluster kindnet-205231
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-205231" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
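Before the stderr capture below reaches the fatal `docker run`, machine creation first walks minikube's free-subnet scan: `docker network create` fails with "subnet is taken" for 192.168.49.0/24 and 192.168.58.0/24 (other test profiles are still up), and succeeds on 192.168.67.0/24. A minimal Go sketch of that scan, under the assumption that candidates simply step the third octet by 9 as the reservations in the log suggest; the helper createNetwork and its trimmed flag set are illustrative, not minikube's network_create.go.

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork wraps `docker network create` for one candidate subnet;
// a non-nil error stands in for "subnet is taken" (extra -o flags from the
// log are trimmed here for brevity).
func createNetwork(name, subnet, gateway string) error {
	return exec.Command("docker", "network", "create", "--driver=bridge",
		"--subnet="+subnet, "--gateway="+gateway, name).Run()
}

func main() {
	// 192.168.49.0 -> 192.168.58.0 -> 192.168.67.0: step the third octet
	// by 9 until a create succeeds, mirroring the reservations below.
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if err := createNetwork("kindnet-205231", subnet, gateway); err != nil {
			fmt.Printf("failed to create docker network on %s, will retry: %v\n", subnet, err)
			continue
		}
		fmt.Printf("docker network %s created\n", subnet)
		return
	}
	fmt.Println("no free private /24 found")
}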
** stderr ** 
	I1025 21:29:07.497411   17563 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:29:07.497581   17563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:29:07.497586   17563 out.go:309] Setting ErrFile to fd 2...
	I1025 21:29:07.497590   17563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:29:07.498169   17563 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:29:07.499004   17563 out.go:303] Setting JSON to false
	I1025 21:29:07.513612   17563 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5316,"bootTime":1666753231,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:29:07.513712   17563 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:29:07.535010   17563 out.go:177] * [kindnet-205231] minikube v1.27.1 on Darwin 12.6
	I1025 21:29:07.576824   17563 notify.go:220] Checking for updates...
	I1025 21:29:07.598933   17563 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:29:07.620142   17563 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:29:07.641037   17563 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:29:07.663183   17563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:29:07.685119   17563 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:29:07.707684   17563 config.go:180] Loaded profile config "cert-expiration-212703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:29:07.707827   17563 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:29:07.707908   17563 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:29:07.776037   17563 docker.go:137] docker version: linux-20.10.17
	I1025 21:29:07.776192   17563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:29:07.903553   17563 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:29:07.832921245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:29:07.925493   17563 out.go:177] * Using the docker driver based on user configuration
	I1025 21:29:07.948083   17563 start.go:282] selected driver: docker
	I1025 21:29:07.948110   17563 start.go:808] validating driver "docker" against <nil>
	I1025 21:29:07.948143   17563 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:29:07.951770   17563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:29:08.079609   17563 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:29:08.009419343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:29:08.079740   17563 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:29:08.079884   17563 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:29:08.101760   17563 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:29:08.123340   17563 cni.go:95] Creating CNI manager for "kindnet"
	I1025 21:29:08.123445   17563 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:29:08.123468   17563 start_flags.go:317] config:
	{Name:kindnet-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kindnet-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:29:08.145364   17563 out.go:177] * Starting control plane node kindnet-205231 in cluster kindnet-205231
	I1025 21:29:08.167582   17563 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:29:08.189455   17563 out.go:177] * Pulling base image ...
	I1025 21:29:08.253656   17563 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:29:08.253714   17563 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:29:08.253729   17563 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:29:08.253748   17563 cache.go:57] Caching tarball of preloaded images
	I1025 21:29:08.253965   17563 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:29:08.253986   17563 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:29:08.254779   17563 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kindnet-205231/config.json ...
	I1025 21:29:08.254988   17563 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kindnet-205231/config.json: {Name:mkc6fcb45e5d2c6e20ac48b4b3555210f410c7c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:08.316823   17563 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:29:08.316843   17563 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:29:08.316852   17563 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:29:08.316900   17563 start.go:364] acquiring machines lock for kindnet-205231: {Name:mk91af71cf08c6c2e42fc0a6d4c689b86e96ae31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:29:08.317045   17563 start.go:368] acquired machines lock for "kindnet-205231" in 133.27µs
	I1025 21:29:08.317071   17563 start.go:93] Provisioning new machine with config: &{Name:kindnet-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kindnet-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:29:08.317189   17563 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:29:08.361473   17563 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:29:08.361925   17563 start.go:159] libmachine.API.Create for "kindnet-205231" (driver="docker")
	I1025 21:29:08.361970   17563 client.go:168] LocalClient.Create starting
	I1025 21:29:08.362103   17563 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:29:08.362167   17563 main.go:134] libmachine: Decoding PEM data...
	I1025 21:29:08.362192   17563 main.go:134] libmachine: Parsing certificate...
	I1025 21:29:08.362311   17563 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:29:08.362358   17563 main.go:134] libmachine: Decoding PEM data...
	I1025 21:29:08.362373   17563 main.go:134] libmachine: Parsing certificate...
	I1025 21:29:08.363200   17563 cli_runner.go:164] Run: docker network inspect kindnet-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:29:08.426822   17563 cli_runner.go:211] docker network inspect kindnet-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:29:08.426930   17563 network_create.go:272] running [docker network inspect kindnet-205231] to gather additional debugging logs...
	I1025 21:29:08.426945   17563 cli_runner.go:164] Run: docker network inspect kindnet-205231
	W1025 21:29:08.486672   17563 cli_runner.go:211] docker network inspect kindnet-205231 returned with exit code 1
	I1025 21:29:08.486693   17563 network_create.go:275] error running [docker network inspect kindnet-205231]: docker network inspect kindnet-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-205231
	I1025 21:29:08.486705   17563 network_create.go:277] output of [docker network inspect kindnet-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-205231
	
	** /stderr **
	I1025 21:29:08.486768   17563 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:29:08.548192   17563 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8] misses:0}
	I1025 21:29:08.548237   17563 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:08.548253   17563 network_create.go:115] attempt to create docker network kindnet-205231 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:29:08.548322   17563 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-205231 kindnet-205231
	W1025 21:29:08.608889   17563 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-205231 kindnet-205231 returned with exit code 1
	W1025 21:29:08.608953   17563 network_create.go:107] failed to create docker network kindnet-205231 192.168.49.0/24, will retry: subnet is taken
	I1025 21:29:08.609196   17563 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8] amended:false}} dirty:map[] misses:0}
	I1025 21:29:08.609210   17563 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:08.609472   17563 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730] misses:0}
	I1025 21:29:08.609486   17563 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:08.609496   17563 network_create.go:115] attempt to create docker network kindnet-205231 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:29:08.609553   17563 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-205231 kindnet-205231
	W1025 21:29:08.669620   17563 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-205231 kindnet-205231 returned with exit code 1
	W1025 21:29:08.669664   17563 network_create.go:107] failed to create docker network kindnet-205231 192.168.58.0/24, will retry: subnet is taken
	I1025 21:29:08.669931   17563 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730] misses:1}
	I1025 21:29:08.669948   17563 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:08.670167   17563 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730 192.168.67.0:0xc000b0a768] misses:1}
	I1025 21:29:08.670180   17563 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:08.670187   17563 network_create.go:115] attempt to create docker network kindnet-205231 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:29:08.670254   17563 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-205231 kindnet-205231
	I1025 21:29:08.761294   17563 network_create.go:99] docker network kindnet-205231 192.168.67.0/24 created
	I1025 21:29:08.761328   17563 kic.go:106] calculated static IP "192.168.67.2" for the "kindnet-205231" container
	I1025 21:29:08.761435   17563 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:29:08.822316   17563 cli_runner.go:164] Run: docker volume create kindnet-205231 --label name.minikube.sigs.k8s.io=kindnet-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:29:08.884069   17563 oci.go:103] Successfully created a docker volume kindnet-205231
	I1025 21:29:08.884162   17563 cli_runner.go:164] Run: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:29:09.099424   17563 cli_runner.go:211] docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:29:09.099503   17563 client.go:171] LocalClient.Create took 737.522385ms
	I1025 21:29:11.100434   17563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:11.100680   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:11.166807   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:11.166905   17563 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:11.445447   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:11.510028   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:11.510170   17563 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:12.052794   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:12.117528   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:12.117606   17563 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:12.773132   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:12.839455   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	W1025 21:29:12.839557   17563 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	
	W1025 21:29:12.839580   17563 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:12.839639   17563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:12.839720   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:12.899952   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:12.900026   17563 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:13.131422   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:13.234979   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:13.235094   17563 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:13.682553   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:13.749045   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:13.749149   17563 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:14.069762   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:14.137177   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:14.137263   17563 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:14.693607   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:14.760545   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	W1025 21:29:14.760629   17563 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	
	W1025 21:29:14.760649   17563 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:14.760671   17563 start.go:128] duration metric: createHost completed in 6.44345932s
	I1025 21:29:14.760680   17563 start.go:83] releasing machines lock for "kindnet-205231", held for 6.443611816s
	W1025 21:29:14.760693   17563 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:29:14.761069   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:14.821253   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:14.821296   17563 delete.go:82] Unable to get host status for kindnet-205231, assuming it has already been deleted: state: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	W1025 21:29:14.821458   17563 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:29:14.821467   17563 start.go:618] Will try again in 5 seconds ...
	I1025 21:29:19.833164   17563 start.go:364] acquiring machines lock for kindnet-205231: {Name:mk91af71cf08c6c2e42fc0a6d4c689b86e96ae31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:29:19.833316   17563 start.go:368] acquired machines lock for "kindnet-205231" in 112.619µs
	I1025 21:29:19.833347   17563 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:29:19.833361   17563 fix.go:55] fixHost starting: 
	I1025 21:29:19.833741   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:19.896404   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:19.896443   17563 fix.go:103] recreateIfNeeded on kindnet-205231: state= err=unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:19.896464   17563 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:29:19.940353   17563 out.go:177] * docker "kindnet-205231" container is missing, will recreate.
	I1025 21:29:19.962070   17563 delete.go:124] DEMOLISHING kindnet-205231 ...
	I1025 21:29:19.962300   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:20.024394   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:29:20.024435   17563 stop.go:75] unable to get state: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:20.024449   17563 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:20.024793   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:20.085202   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:20.085316   17563 delete.go:82] Unable to get host status for kindnet-205231, assuming it has already been deleted: state: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:20.085395   17563 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-205231
	W1025 21:29:20.144917   17563 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-205231 returned with exit code 1
	I1025 21:29:20.145017   17563 kic.go:356] could not find the container kindnet-205231 to remove it. will try anyways
	I1025 21:29:20.145097   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:20.205186   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:29:20.205309   17563 oci.go:84] error getting container status, will try to delete anyways: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:20.205383   17563 cli_runner.go:164] Run: docker exec --privileged -t kindnet-205231 /bin/bash -c "sudo init 0"
	W1025 21:29:20.267793   17563 cli_runner.go:211] docker exec --privileged -t kindnet-205231 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:29:20.267896   17563 oci.go:646] error shutdown kindnet-205231: docker exec --privileged -t kindnet-205231 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:21.272333   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:21.339297   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:21.339341   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:21.339350   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:21.339378   17563 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:21.742963   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:21.806341   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:21.806476   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:21.806486   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:21.806505   17563 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:22.569982   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:22.634703   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:22.634755   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:22.634763   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:22.634784   17563 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:24.116852   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:24.181446   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:24.181488   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:24.181495   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:24.181515   17563 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:25.391137   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:25.456417   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:25.456478   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:25.456487   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:25.456507   17563 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:27.691263   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:27.757377   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:27.757419   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:27.757426   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:27.757445   17563 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:30.867567   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:30.931237   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:30.931278   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:30.931293   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:30.931314   17563 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:36.454746   17563 cli_runner.go:164] Run: docker container inspect kindnet-205231 --format={{.State.Status}}
	W1025 21:29:36.519972   17563 cli_runner.go:211] docker container inspect kindnet-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:36.520014   17563 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:36.520020   17563 oci.go:660] temporary error: container kindnet-205231 status is  but expect it to be exited
	I1025 21:29:36.520055   17563 oci.go:88] couldn't shut down kindnet-205231 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-205231": docker container inspect kindnet-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	 
	I1025 21:29:36.520125   17563 cli_runner.go:164] Run: docker rm -f -v kindnet-205231
	I1025 21:29:36.583284   17563 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-205231
	W1025 21:29:36.643740   17563 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-205231 returned with exit code 1
	I1025 21:29:36.643947   17563 cli_runner.go:164] Run: docker network inspect kindnet-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:29:36.703988   17563 cli_runner.go:164] Run: docker network rm kindnet-205231
	W1025 21:29:36.808576   17563 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:29:36.808661   17563 fix.go:115] Sleeping 1 second for extra luck!
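	[editor's note: the retry.go:31 lines above show minikube's jittered, growing backoff while it polls `docker container inspect` for an "exited" state (waits of roughly 1.2s, 2.2s, 3.1s, 5.5s), then gives up non-fatally with "might be okay". A minimal, hypothetical Go sketch of that pattern; names and the 30s deadline are assumptions, not minikube's actual code:]

    // backoff_sketch.go: hypothetical sketch of the retry pattern in the
    // retry.go:31 lines above: poll the container state, grow the wait with
    // jitter after each failure, and give up (non-fatally) at a deadline.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "strings"
        "time"
    )

    // containerExited is the check being retried above: `docker container
    // inspect <name> --format {{.State.Status}}` must report "exited".
    func containerExited(name string) error {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return fmt.Errorf("unknown state %q: %v", name, err)
        }
        if s := strings.TrimSpace(string(out)); s != "exited" {
            return fmt.Errorf("status is %q but expect it to be exited", s)
        }
        return nil
    }

    func main() {
        wait := time.Second
        deadline := time.Now().Add(30 * time.Second) // assumed budget
        for {
            err := containerExited("kindnet-205231")
            if err == nil {
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("couldn't shut down (might be okay):", err)
                return
            }
            // jittered growth, similar to the 1.2s -> 2.2s -> 3.1s -> 5.5s waits above
            wait += time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
    }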
	I1025 21:29:37.811493   17563 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:29:37.833744   17563 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:29:37.833898   17563 start.go:159] libmachine.API.Create for "kindnet-205231" (driver="docker")
	I1025 21:29:37.833934   17563 client.go:168] LocalClient.Create starting
	I1025 21:29:37.834061   17563 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:29:37.834125   17563 main.go:134] libmachine: Decoding PEM data...
	I1025 21:29:37.834144   17563 main.go:134] libmachine: Parsing certificate...
	I1025 21:29:37.834228   17563 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:29:37.834275   17563 main.go:134] libmachine: Decoding PEM data...
	I1025 21:29:37.834293   17563 main.go:134] libmachine: Parsing certificate...
	I1025 21:29:37.834887   17563 cli_runner.go:164] Run: docker network inspect kindnet-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:29:37.900905   17563 cli_runner.go:211] docker network inspect kindnet-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:29:37.900986   17563 network_create.go:272] running [docker network inspect kindnet-205231] to gather additional debugging logs...
	I1025 21:29:37.901000   17563 cli_runner.go:164] Run: docker network inspect kindnet-205231
	W1025 21:29:37.961972   17563 cli_runner.go:211] docker network inspect kindnet-205231 returned with exit code 1
	I1025 21:29:37.961999   17563 network_create.go:275] error running [docker network inspect kindnet-205231]: docker network inspect kindnet-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-205231
	I1025 21:29:37.962011   17563 network_create.go:277] output of [docker network inspect kindnet-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-205231
	
	** /stderr **
	I1025 21:29:37.962076   17563 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:29:38.023476   17563 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730 192.168.67.0:0xc000b0a768] misses:1}
	I1025 21:29:38.023503   17563 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:38.023743   17563 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730 192.168.67.0:0xc000b0a768] misses:2}
	I1025 21:29:38.023752   17563 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:38.023946   17563 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730 192.168.67.0:0xc000b0a768] amended:false}} dirty:map[] misses:0}
	I1025 21:29:38.023955   17563 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:38.024142   17563 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730 192.168.67.0:0xc000b0a768] amended:true}} dirty:map[192.168.49.0:0xc000b0a6f8 192.168.58.0:0xc000b0a730 192.168.67.0:0xc000b0a768 192.168.76.0:0xc000118248] misses:0}
	I1025 21:29:38.024156   17563 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
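	[editor's note: the network.go:286/295 lines above walk candidate /24 blocks (192.168.49.0, .58.0, .67.0, then .76.0) and take the first one without an unexpired reservation, holding it for 1m0s. A rough Go sketch of that selection under assumed names; the step of 9 between candidates is inferred from the subnets in this log:]

    // subnet_sketch.go: hypothetical sketch of the free-subnet walk above.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var (
        mu       sync.Mutex
        reserved = map[string]time.Time{} // subnet -> reservation expiry
    )

    func reserveFreeSubnet() (string, bool) {
        mu.Lock()
        defer mu.Unlock()
        // 192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0, ...
        for third := 49; third <= 255; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0", third)
            if exp, ok := reserved[subnet]; ok && time.Now().Before(exp) {
                fmt.Println("skipping subnet", subnet, "that has unexpired reservation")
                continue
            }
            reserved[subnet] = time.Now().Add(time.Minute) // "for 1m0s" in the log
            return subnet + "/24", true
        }
        return "", false
    }

    func main() {
        if cidr, ok := reserveFreeSubnet(); ok {
            fmt.Println("using free private subnet", cidr)
        }
    }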
	I1025 21:29:38.024163   17563 network_create.go:115] attempt to create docker network kindnet-205231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:29:38.024223   17563 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-205231 kindnet-205231
	I1025 21:29:38.116717   17563 network_create.go:99] docker network kindnet-205231 192.168.76.0/24 created
	I1025 21:29:38.116754   17563 kic.go:106] calculated static IP "192.168.76.2" for the "kindnet-205231" container
	I1025 21:29:38.116869   17563 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:29:38.178144   17563 cli_runner.go:164] Run: docker volume create kindnet-205231 --label name.minikube.sigs.k8s.io=kindnet-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:29:38.297497   17563 oci.go:103] Successfully created a docker volume kindnet-205231
	I1025 21:29:38.297608   17563 cli_runner.go:164] Run: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:29:38.444057   17563 cli_runner.go:211] docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:29:38.444093   17563 client.go:171] LocalClient.Create took 609.741149ms
	I1025 21:29:40.447734   17563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:40.447857   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:40.512739   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:40.512835   17563 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
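	[editor's note: the probes above and below resolve the node's SSH endpoint by asking Docker which host port is published against the container's 22/tcp; they keep failing here only because the container was never created. A standalone Go sketch of that lookup, reusing the exact Go template from the log (the helper name is an assumption):]

    // sshport_sketch.go: hypothetical sketch of the port-22 lookup retried above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            // Covers both "No such container" (exit 1) and a missing 22/tcp mapping.
            return "", fmt.Errorf("get port 22 for %q: %v", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("kindnet-205231")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh is published on host port", port)
    }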
	I1025 21:29:40.711599   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:40.775634   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:40.775740   17563 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:41.220537   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:41.286081   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:41.286175   17563 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:41.692945   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:41.757963   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:41.758132   17563 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:42.353170   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:42.416735   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	W1025 21:29:42.416836   17563 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	
	W1025 21:29:42.416853   17563 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:42.416900   17563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:42.416942   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:42.478430   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:42.478524   17563 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:42.748749   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:42.810498   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:42.810595   17563 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:43.324065   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:43.389338   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:43.389429   17563 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:43.838058   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:43.904297   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	W1025 21:29:43.904400   17563 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	
	W1025 21:29:43.904421   17563 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
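	[editor's note: the two ssh_runner commands above are minikube's disk checks, normally run inside the Linux node over SSH: `df -h /var | awk 'NR==2{print $5}'` reads the Use% column and `df -BG /var | awk 'NR==2{print $4}'` the free gibibytes; both fail here only because there is no container to SSH into. A hypothetical local approximation in Go; it assumes GNU df semantics (as inside the node), so `-BG` will not work on BSD/macOS df:]

    // dfcheck_sketch.go: hypothetical local stand-in for the /var capacity probes.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func dfField(args, awk string) (string, error) {
        out, err := exec.Command("sh", "-c", "df "+args+" /var | awk '"+awk+"'").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        usedPct, err := dfField("-h", "NR==2{print $5}") // e.g. "37%"
        if err != nil {
            fmt.Println("error getting percentage of /var that is used:", err)
            return
        }
        freeGiB, err := dfField("-BG", "NR==2{print $4}") // e.g. "12G"
        if err != nil {
            fmt.Println("error getting GiB of /var that is available:", err)
            return
        }
        fmt.Printf("/var: %s used, %s available\n", usedPct, freeGiB)
    }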
	I1025 21:29:43.904428   17563 start.go:128] duration metric: createHost completed in 6.089426968s
	I1025 21:29:43.904507   17563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:43.904557   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:43.964483   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:43.964565   17563 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:44.278245   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:44.340898   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:44.340992   17563 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:44.608469   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:44.673246   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:44.673334   17563 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:45.442094   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:45.510015   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	W1025 21:29:45.510106   17563 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	
	W1025 21:29:45.510125   17563 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:45.510168   17563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:45.510211   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:45.569887   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:45.569980   17563 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:45.826815   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:45.894286   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:45.894381   17563 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:46.093544   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:46.157733   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	I1025 21:29:46.157814   17563 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:46.530744   17563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231
	W1025 21:29:46.597783   17563 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231 returned with exit code 1
	W1025 21:29:46.597881   17563 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	
	W1025 21:29:46.597909   17563 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-205231
	I1025 21:29:46.597920   17563 fix.go:57] fixHost completed within 26.736814733s
	I1025 21:29:46.597929   17563 start.go:83] releasing machines lock for "kindnet-205231", held for 26.736856201s
	W1025 21:29:46.598109   17563 out.go:239] * Failed to start docker container. Running "minikube delete -p kindnet-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p kindnet-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:29:46.641669   17563 out.go:177] 
	W1025 21:29:46.663982   17563 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-205231 container: docker run --rm --name kindnet-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-205231 --entrypoint /usr/bin/test -v kindnet-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:29:46.664011   17563 out.go:239] * 
	* 
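	[editor's note: the stderr above is the actual root cause of this failure group: Docker Desktop's embedded containerd socket (/var/run/desktop-containerd/containerd.sock) refuses connections, so every `docker run` fails with exit status 125 before minikube ever gets a container. A hedged pre-flight probe, plain docker CLI only, that would separate this kind of environment breakage from a genuine minikube regression:]

    // daemoncheck_sketch.go: hypothetical pre-flight probe; if the daemon (or
    // Docker Desktop's containerd backend) is down, `docker version` already
    // fails on the server half before any test container is attempted.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "version", "--format",
            "{{.Server.Version}}").CombinedOutput()
        if err != nil {
            fmt.Printf("docker daemon not healthy: %v\n%s", err, out)
            return
        }
        fmt.Printf("docker server %s is reachable\n", out)
    }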
	W1025 21:29:46.665191   17563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:29:46.728887   17563 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (39.28s)

TestNetworkPlugins/group/cilium/Start (40.26s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
E1025 21:30:07.255722    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
E1025 21:30:12.501122    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 21:30:13.038681    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cilium-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : exit status 80 (40.245666743s)

-- stdout --
	* [cilium-205231] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cilium-205231 in cluster cilium-205231
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-205231" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1025 21:29:47.822514   17769 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:29:47.822684   17769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:29:47.822693   17769 out.go:309] Setting ErrFile to fd 2...
	I1025 21:29:47.822699   17769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:29:47.822805   17769 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:29:47.823292   17769 out.go:303] Setting JSON to false
	I1025 21:29:47.837852   17769 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5356,"bootTime":1666753231,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:29:47.837949   17769 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:29:47.859893   17769 out.go:177] * [cilium-205231] minikube v1.27.1 on Darwin 12.6
	I1025 21:29:47.902856   17769 notify.go:220] Checking for updates...
	I1025 21:29:47.924898   17769 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:29:47.946883   17769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:29:47.968834   17769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:29:47.990012   17769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:29:48.011947   17769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:29:48.034443   17769 config.go:180] Loaded profile config "cert-expiration-212703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:29:48.034586   17769 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:29:48.034669   17769 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:29:48.101561   17769 docker.go:137] docker version: linux-20.10.17
	I1025 21:29:48.101682   17769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:29:48.228404   17769 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:29:48.159912427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:29:48.277115   17769 out.go:177] * Using the docker driver based on user configuration
	I1025 21:29:48.298979   17769 start.go:282] selected driver: docker
	I1025 21:29:48.299005   17769 start.go:808] validating driver "docker" against <nil>
	I1025 21:29:48.299028   17769 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:29:48.302378   17769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:29:48.431435   17769 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:29:48.362162158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:29:48.431568   17769 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:29:48.431696   17769 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:29:48.453382   17769 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:29:48.475302   17769 cni.go:95] Creating CNI manager for "cilium"
	I1025 21:29:48.475379   17769 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1025 21:29:48.475398   17769 start_flags.go:317] config:
	{Name:cilium-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:29:48.497195   17769 out.go:177] * Starting control plane node cilium-205231 in cluster cilium-205231
	I1025 21:29:48.541456   17769 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:29:48.563043   17769 out.go:177] * Pulling base image ...
	I1025 21:29:48.606442   17769 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:29:48.606520   17769 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:29:48.606588   17769 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:29:48.606617   17769 cache.go:57] Caching tarball of preloaded images
	I1025 21:29:48.606851   17769 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:29:48.606872   17769 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:29:48.607836   17769 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/cilium-205231/config.json ...
	I1025 21:29:48.607964   17769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/cilium-205231/config.json: {Name:mkf3fc71147619117efeb4e044484b653faa5cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:48.669121   17769 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:29:48.669139   17769 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:29:48.669147   17769 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:29:48.669210   17769 start.go:364] acquiring machines lock for cilium-205231: {Name:mk82cfbfc4a366417381556aa67f8f5062cc5160 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:29:48.669370   17769 start.go:368] acquired machines lock for "cilium-205231" in 147.906µs
	I1025 21:29:48.669397   17769 start.go:93] Provisioning new machine with config: &{Name:cilium-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:29:48.669453   17769 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:29:48.713131   17769 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:29:48.713581   17769 start.go:159] libmachine.API.Create for "cilium-205231" (driver="docker")
	I1025 21:29:48.713623   17769 client.go:168] LocalClient.Create starting
	I1025 21:29:48.713762   17769 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:29:48.713826   17769 main.go:134] libmachine: Decoding PEM data...
	I1025 21:29:48.713851   17769 main.go:134] libmachine: Parsing certificate...
	I1025 21:29:48.713945   17769 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:29:48.713993   17769 main.go:134] libmachine: Decoding PEM data...
	I1025 21:29:48.714022   17769 main.go:134] libmachine: Parsing certificate...
	I1025 21:29:48.714782   17769 cli_runner.go:164] Run: docker network inspect cilium-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:29:48.776600   17769 cli_runner.go:211] docker network inspect cilium-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:29:48.776676   17769 network_create.go:272] running [docker network inspect cilium-205231] to gather additional debugging logs...
	I1025 21:29:48.776701   17769 cli_runner.go:164] Run: docker network inspect cilium-205231
	W1025 21:29:48.837008   17769 cli_runner.go:211] docker network inspect cilium-205231 returned with exit code 1
	I1025 21:29:48.837035   17769 network_create.go:275] error running [docker network inspect cilium-205231]: docker network inspect cilium-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-205231
	I1025 21:29:48.837048   17769 network_create.go:277] output of [docker network inspect cilium-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-205231
	
	** /stderr **
	I1025 21:29:48.837121   17769 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:29:48.898170   17769 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a08128] misses:0}
	I1025 21:29:48.898207   17769 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:48.898223   17769 network_create.go:115] attempt to create docker network cilium-205231 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:29:48.898286   17769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-205231 cilium-205231
	W1025 21:29:48.959652   17769 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-205231 cilium-205231 returned with exit code 1
	W1025 21:29:48.959690   17769 network_create.go:107] failed to create docker network cilium-205231 192.168.49.0/24, will retry: subnet is taken
	I1025 21:29:48.959953   17769 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128] amended:false}} dirty:map[] misses:0}
	I1025 21:29:48.959968   17769 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:48.960180   17769 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128] amended:true}} dirty:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888] misses:0}
	I1025 21:29:48.960196   17769 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:48.960211   17769 network_create.go:115] attempt to create docker network cilium-205231 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:29:48.960271   17769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-205231 cilium-205231
	W1025 21:29:49.021726   17769 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-205231 cilium-205231 returned with exit code 1
	W1025 21:29:49.021758   17769 network_create.go:107] failed to create docker network cilium-205231 192.168.58.0/24, will retry: subnet is taken
	I1025 21:29:49.022014   17769 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128] amended:true}} dirty:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888] misses:1}
	I1025 21:29:49.022029   17769 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:49.022258   17769 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128] amended:true}} dirty:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888 192.168.67.0:0xc00063a6f0] misses:1}
	I1025 21:29:49.022270   17769 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:29:49.022278   17769 network_create.go:115] attempt to create docker network cilium-205231 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:29:49.022345   17769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-205231 cilium-205231
	I1025 21:29:49.113375   17769 network_create.go:99] docker network cilium-205231 192.168.67.0/24 created
	I1025 21:29:49.113418   17769 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-205231" container
	I1025 21:29:49.113536   17769 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:29:49.175716   17769 cli_runner.go:164] Run: docker volume create cilium-205231 --label name.minikube.sigs.k8s.io=cilium-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:29:49.237468   17769 oci.go:103] Successfully created a docker volume cilium-205231
	I1025 21:29:49.237557   17769 cli_runner.go:164] Run: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:29:49.463626   17769 cli_runner.go:211] docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:29:49.463673   17769 client.go:171] LocalClient.Create took 749.784443ms
	I1025 21:29:51.466598   17769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:51.466814   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:51.530546   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:51.530656   17769 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:51.807275   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:51.874513   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:51.874592   17769 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:52.415401   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:52.481579   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:52.481680   17769 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:53.139294   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:53.205002   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	W1025 21:29:53.205129   17769 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	
	W1025 21:29:53.205145   17769 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:53.205190   17769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:53.205230   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:53.266169   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:53.266246   17769 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:53.498000   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:53.562697   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:53.562780   17769 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:54.009484   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:54.075877   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:54.075957   17769 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:54.396024   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:54.458722   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:29:54.458819   17769 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:55.015335   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:29:55.080141   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	W1025 21:29:55.080258   17769 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	
	W1025 21:29:55.080278   17769 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:29:55.080286   17769 start.go:128] duration metric: createHost completed in 6.409026421s
	I1025 21:29:55.080299   17769 start.go:83] releasing machines lock for "cilium-205231", held for 6.409120718s
	W1025 21:29:55.080313   17769 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:29:55.080701   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:29:55.142421   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:29:55.142470   17769 delete.go:82] Unable to get host status for cilium-205231, assuming it has already been deleted: state: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	W1025 21:29:55.142641   17769 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:29:55.142650   17769 start.go:618] Will try again in 5 seconds ...
	I1025 21:30:00.145915   17769 start.go:364] acquiring machines lock for cilium-205231: {Name:mk82cfbfc4a366417381556aa67f8f5062cc5160 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:30:00.146056   17769 start.go:368] acquired machines lock for "cilium-205231" in 104.904µs
	I1025 21:30:00.146096   17769 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:30:00.146109   17769 fix.go:55] fixHost starting: 
	I1025 21:30:00.146467   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:00.209440   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:00.209482   17769 fix.go:103] recreateIfNeeded on cilium-205231: state= err=unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:00.209517   17769 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:30:00.253253   17769 out.go:177] * docker "cilium-205231" container is missing, will recreate.
	I1025 21:30:00.275053   17769 delete.go:124] DEMOLISHING cilium-205231 ...
	I1025 21:30:00.275316   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:00.336732   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:30:00.336769   17769 stop.go:75] unable to get state: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:00.336783   17769 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:00.337150   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:00.397869   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:00.397913   17769 delete.go:82] Unable to get host status for cilium-205231, assuming it has already been deleted: state: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:00.397999   17769 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-205231
	W1025 21:30:00.459173   17769 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-205231 returned with exit code 1
	I1025 21:30:00.459201   17769 kic.go:356] could not find the container cilium-205231 to remove it. will try anyways
	I1025 21:30:00.459282   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:00.519775   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:30:00.519839   17769 oci.go:84] error getting container status, will try to delete anyways: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:00.519916   17769 cli_runner.go:164] Run: docker exec --privileged -t cilium-205231 /bin/bash -c "sudo init 0"
	W1025 21:30:00.580298   17769 cli_runner.go:211] docker exec --privileged -t cilium-205231 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:30:00.580319   17769 oci.go:646] error shutdown cilium-205231: docker exec --privileged -t cilium-205231 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:01.581231   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:01.644729   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:01.644771   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:01.644777   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:01.644793   17769 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:02.047012   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:02.110759   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:02.110808   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:02.110817   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:02.110835   17769 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:02.874766   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:02.937776   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:02.937816   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:02.937824   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:02.937842   17769 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:04.418251   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:04.482149   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:04.482200   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:04.482206   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:04.482223   17769 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:05.689121   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:05.754595   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:05.754649   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:05.754657   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:05.754678   17769 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:07.985211   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:08.048298   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:08.048337   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:08.048345   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:08.048364   17769 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:11.155101   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:11.219509   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:11.219555   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:11.219563   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:11.219583   17769 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:16.740449   17769 cli_runner.go:164] Run: docker container inspect cilium-205231 --format={{.State.Status}}
	W1025 21:30:16.805796   17769 cli_runner.go:211] docker container inspect cilium-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:16.805831   17769 oci.go:658] temporary error verifying shutdown: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:16.805839   17769 oci.go:660] temporary error: container cilium-205231 status is  but expect it to be exited
	I1025 21:30:16.805866   17769 oci.go:88] couldn't shut down cilium-205231 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-205231": docker container inspect cilium-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	 
	I1025 21:30:16.805925   17769 cli_runner.go:164] Run: docker rm -f -v cilium-205231
	I1025 21:30:16.869986   17769 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-205231
	W1025 21:30:16.929994   17769 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-205231 returned with exit code 1
	I1025 21:30:16.930101   17769 cli_runner.go:164] Run: docker network inspect cilium-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:16.992190   17769 cli_runner.go:164] Run: docker network rm cilium-205231
	W1025 21:30:17.100754   17769 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:30:17.100772   17769 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:30:18.101328   17769 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:30:18.123558   17769 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:30:18.123769   17769 start.go:159] libmachine.API.Create for "cilium-205231" (driver="docker")
	I1025 21:30:18.123795   17769 client.go:168] LocalClient.Create starting
	I1025 21:30:18.123983   17769 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:30:18.124099   17769 main.go:134] libmachine: Decoding PEM data...
	I1025 21:30:18.124123   17769 main.go:134] libmachine: Parsing certificate...
	I1025 21:30:18.124218   17769 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:30:18.124265   17769 main.go:134] libmachine: Decoding PEM data...
	I1025 21:30:18.124289   17769 main.go:134] libmachine: Parsing certificate...
	I1025 21:30:18.145709   17769 cli_runner.go:164] Run: docker network inspect cilium-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:30:18.211591   17769 cli_runner.go:211] docker network inspect cilium-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:30:18.211678   17769 network_create.go:272] running [docker network inspect cilium-205231] to gather additional debugging logs...
	I1025 21:30:18.211702   17769 cli_runner.go:164] Run: docker network inspect cilium-205231
	W1025 21:30:18.272145   17769 cli_runner.go:211] docker network inspect cilium-205231 returned with exit code 1
	I1025 21:30:18.272167   17769 network_create.go:275] error running [docker network inspect cilium-205231]: docker network inspect cilium-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-205231
	I1025 21:30:18.272180   17769 network_create.go:277] output of [docker network inspect cilium-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-205231
	
	** /stderr **
	I1025 21:30:18.272243   17769 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:18.333927   17769 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128] amended:true}} dirty:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888 192.168.67.0:0xc00063a6f0] misses:1}
	I1025 21:30:18.333955   17769 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:18.334162   17769 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128] amended:true}} dirty:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888 192.168.67.0:0xc00063a6f0] misses:2}
	I1025 21:30:18.334170   17769 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:18.334367   17769 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888 192.168.67.0:0xc00063a6f0] amended:false}} dirty:map[] misses:0}
	I1025 21:30:18.334375   17769 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:18.334573   17769 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888 192.168.67.0:0xc00063a6f0] amended:true}} dirty:map[192.168.49.0:0xc000a08128 192.168.58.0:0xc000576888 192.168.67.0:0xc00063a6f0 192.168.76.0:0xc000a082b0] misses:0}
	I1025 21:30:18.334594   17769 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:18.334607   17769 network_create.go:115] attempt to create docker network cilium-205231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:30:18.334682   17769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-205231 cilium-205231
	I1025 21:30:18.424497   17769 network_create.go:99] docker network cilium-205231 192.168.76.0/24 created
	I1025 21:30:18.424525   17769 kic.go:106] calculated static IP "192.168.76.2" for the "cilium-205231" container
	I1025 21:30:18.424625   17769 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:30:18.486053   17769 cli_runner.go:164] Run: docker volume create cilium-205231 --label name.minikube.sigs.k8s.io=cilium-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:30:18.546040   17769 oci.go:103] Successfully created a docker volume cilium-205231
	I1025 21:30:18.546161   17769 cli_runner.go:164] Run: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:30:18.681854   17769 cli_runner.go:211] docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:30:18.681891   17769 client.go:171] LocalClient.Create took 558.061049ms
	I1025 21:30:20.684279   17769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:30:20.684454   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:20.748113   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:20.748203   17769 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:20.948900   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:21.012119   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:21.012203   17769 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:21.456768   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:21.522804   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:21.522890   17769 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:21.928262   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:21.992358   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:21.992443   17769 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:22.586489   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:22.651707   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	W1025 21:30:22.651794   17769 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	
	W1025 21:30:22.651811   17769 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:22.651886   17769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:30:22.651949   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:22.712417   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:22.712519   17769 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:22.980602   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:23.042617   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:23.042699   17769 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:23.556073   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:23.620174   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:23.620281   17769 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:24.068842   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:24.131687   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	W1025 21:30:24.131777   17769 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	
	W1025 21:30:24.131792   17769 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:24.131805   17769 start.go:128] duration metric: createHost completed in 6.030180284s
	I1025 21:30:24.131885   17769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:30:24.131929   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:24.192088   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:24.192171   17769 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:24.507706   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:24.572911   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:24.573001   17769 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:24.840340   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:24.905594   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:24.905700   17769 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:25.674838   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:26.744033   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:26.744054   17769 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: (1.069117735s)
	W1025 21:30:26.744176   17769 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	
	W1025 21:30:26.744232   17769 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:26.744323   17769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:30:26.744420   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:26.809044   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:26.809123   17769 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:27.067334   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:27.131900   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:27.131982   17769 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:27.331459   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:27.397798   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	I1025 21:30:27.397883   17769 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:27.769240   17769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231
	W1025 21:30:27.852598   17769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231 returned with exit code 1
	W1025 21:30:27.852703   17769 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	
	W1025 21:30:27.852728   17769 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-205231
	I1025 21:30:27.852736   17769 fix.go:57] fixHost completed within 27.704456075s
	I1025 21:30:27.852745   17769 start.go:83] releasing machines lock for "cilium-205231", held for 27.704507476s
	W1025 21:30:27.852969   17769 out.go:239] * Failed to start docker container. Running "minikube delete -p cilium-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p cilium-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:30:27.896353   17769 out.go:177] 
	W1025 21:30:27.918617   17769 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for cilium-205231 container: docker run --rm --name cilium-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-205231 --entrypoint /usr/bin/test -v cilium-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:30:27.918648   17769 out.go:239] * 
	* 
	W1025 21:30:27.921214   17769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:30:27.999491   17769 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (40.26s)
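
Triage note: every failure in the cilium block above traces to a single root cause visible in stderr: Docker Desktop's containerd socket was down ("dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused"), so the preload-sidecar `docker run` exits 125, the container is never created, and every subsequent `docker container inspect` retry fails with "No such container" until the test budget is exhausted. Below is a minimal Go sketch of a daemon pre-flight probe that would surface this state immediately instead of retrying inspect calls. It is illustrative only, not minikube code; `dockerDaemonReachable` is an invented name.

	// Hypothetical pre-flight check, not minikube code: before shelling out
	// to `docker run`, verify that the Docker daemon is actually reachable.
	// On the host captured in this log, the probe would fail fast with the
	// same connection-refused error seen in stderr.
	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func dockerDaemonReachable(ctx context.Context) error {
		ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
		defer cancel()
		// `docker version --format ...` round-trips to the daemon, so it
		// fails when the daemon (or its containerd backend) is down, not
		// just when the client binary is missing.
		out, err := exec.CommandContext(ctx, "docker", "version",
			"--format", "{{.Server.Version}}").CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker daemon not reachable: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := dockerDaemonReachable(context.Background()); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("docker daemon is up")
	}

Run against the host in this state, the probe exits non-zero within the timeout, which matches the behavior the log spends roughly 40 seconds rediscovering through inspect retries.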
TestNetworkPlugins/group/calico/Start (39.47s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 80 (39.457064747s)

-- stdout --
	* [calico-205231] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-205231 in cluster calico-205231
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-205231" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1025 21:30:29.092752   17998 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:30:29.092949   17998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:30:29.092954   17998 out.go:309] Setting ErrFile to fd 2...
	I1025 21:30:29.092958   17998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:30:29.093062   17998 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:30:29.093547   17998 out.go:303] Setting JSON to false
	I1025 21:30:29.108198   17998 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5398,"bootTime":1666753231,"procs":367,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:30:29.108289   17998 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:30:29.130077   17998 out.go:177] * [calico-205231] minikube v1.27.1 on Darwin 12.6
	I1025 21:30:29.152109   17998 notify.go:220] Checking for updates...
	I1025 21:30:29.174118   17998 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:30:29.196255   17998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:30:29.218043   17998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:30:29.239188   17998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:30:29.261270   17998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:30:29.283824   17998 config.go:180] Loaded profile config "cert-expiration-212703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:30:29.283955   17998 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:30:29.284034   17998 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:30:29.351112   17998 docker.go:137] docker version: linux-20.10.17
	I1025 21:30:29.351265   17998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:30:29.479103   17998 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:30:29.421919621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:30:29.501287   17998 out.go:177] * Using the docker driver based on user configuration
	I1025 21:30:29.522864   17998 start.go:282] selected driver: docker
	I1025 21:30:29.522905   17998 start.go:808] validating driver "docker" against <nil>
	I1025 21:30:29.522974   17998 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:30:29.526318   17998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:30:29.654267   17998 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:30:29.598356523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:30:29.654354   17998 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:30:29.654495   17998 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:30:29.676235   17998 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:30:29.698161   17998 cni.go:95] Creating CNI manager for "calico"
	I1025 21:30:29.698188   17998 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1025 21:30:29.698223   17998 start_flags.go:317] config:
	{Name:calico-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:30:29.720123   17998 out.go:177] * Starting control plane node calico-205231 in cluster calico-205231
	I1025 21:30:29.763165   17998 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:30:29.785096   17998 out.go:177] * Pulling base image ...
	I1025 21:30:29.827190   17998 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:30:29.827200   17998 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:30:29.827276   17998 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:30:29.827323   17998 cache.go:57] Caching tarball of preloaded images
	I1025 21:30:29.827533   17998 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:30:29.827550   17998 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
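	The preload check above resolves a local tarball of pre-pulled images before falling back to a download. Below is a minimal Go sketch of how that cache path is assembled; the helper name is hypothetical, and the schema key "v18" plus the overlay2/amd64 suffixes are read off the filename in the log, not from minikube's source.
	
	package main
	
	import (
		"fmt"
		"path/filepath"
	)
	
	// preloadTarball assembles the cache path seen in the log: the tarball name
	// is keyed by a preload schema version ("v18" in this run), the requested
	// Kubernetes version, the container runtime, the storage driver and the arch.
	func preloadTarball(minikubeHome, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
			k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}
	
	func main() {
		// Reproduces the path logged by preload.go:148 for this run.
		fmt.Println(preloadTarball(
			"/Users/jenkins/minikube-integration/14956-2080/.minikube",
			"v1.25.3", "docker", "amd64"))
	}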
	I1025 21:30:29.828493   17998 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/calico-205231/config.json ...
	I1025 21:30:29.828615   17998 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/calico-205231/config.json: {Name:mk58351feca160d99260e071ae166a39cd5c9ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:30:29.892332   17998 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:30:29.892356   17998 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:30:29.892365   17998 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:30:29.892410   17998 start.go:364] acquiring machines lock for calico-205231: {Name:mk7076ab2a8012cf6a913e687e3fcf4a772b25f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:30:29.892552   17998 start.go:368] acquired machines lock for "calico-205231" in 131.043µs
	I1025 21:30:29.892577   17998 start.go:93] Provisioning new machine with config: &{Name:calico-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:30:29.892640   17998 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:30:29.936157   17998 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:30:29.936569   17998 start.go:159] libmachine.API.Create for "calico-205231" (driver="docker")
	I1025 21:30:29.936610   17998 client.go:168] LocalClient.Create starting
	I1025 21:30:29.936742   17998 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:30:29.936812   17998 main.go:134] libmachine: Decoding PEM data...
	I1025 21:30:29.936836   17998 main.go:134] libmachine: Parsing certificate...
	I1025 21:30:29.936938   17998 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:30:29.936983   17998 main.go:134] libmachine: Decoding PEM data...
	I1025 21:30:29.937003   17998 main.go:134] libmachine: Parsing certificate...
	I1025 21:30:29.937791   17998 cli_runner.go:164] Run: docker network inspect calico-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:30:30.000319   17998 cli_runner.go:211] docker network inspect calico-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:30:30.000402   17998 network_create.go:272] running [docker network inspect calico-205231] to gather additional debugging logs...
	I1025 21:30:30.000415   17998 cli_runner.go:164] Run: docker network inspect calico-205231
	W1025 21:30:30.061511   17998 cli_runner.go:211] docker network inspect calico-205231 returned with exit code 1
	I1025 21:30:30.061532   17998 network_create.go:275] error running [docker network inspect calico-205231]: docker network inspect calico-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-205231
	I1025 21:30:30.061546   17998 network_create.go:277] output of [docker network inspect calico-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-205231
	
	** /stderr **
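	The failed inspect above is the expected "does the cluster network exist yet?" probe: minikube shells out to docker network inspect with a Go template, and exit status 1 plus "No such network" on stderr is treated as absence rather than failure. A simplified Go sketch of that probe, assuming docker is on PATH; the helper name is illustrative and the template is shortened from the one in the log.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// inspectNetwork mirrors the probe in the log: ask dockerd for one network,
	// rendered through a Go template (shortened here). A missing network
	// surfaces as exit status 1 with "No such network" on stderr, which the
	// caller treats as "absent, go create it" rather than as an error.
	func inspectNetwork(name string) (string, bool, error) {
		format := `{"Name":"{{.Name}}","Driver":"{{.Driver}}",` +
			`"Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`
		cmd := exec.Command("docker", "network", "inspect", name, "--format", format)
		out, err := cmd.CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such network") {
				return "", false, nil // absent, not an error
			}
			return "", false, fmt.Errorf("inspect %s: %w: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), true, nil
	}
	
	func main() {
		json, exists, err := inspectNetwork("calico-205231")
		fmt.Println(json, exists, err)
	}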
	I1025 21:30:30.061628   17998 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:30.125661   17998 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8] misses:0}
	I1025 21:30:30.125696   17998 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:30.125712   17998 network_create.go:115] attempt to create docker network calico-205231 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:30:30.125797   17998 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-205231 calico-205231
	W1025 21:30:30.185657   17998 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-205231 calico-205231 returned with exit code 1
	W1025 21:30:30.185691   17998 network_create.go:107] failed to create docker network calico-205231 192.168.49.0/24, will retry: subnet is taken
	I1025 21:30:30.185986   17998 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8] amended:false}} dirty:map[] misses:0}
	I1025 21:30:30.186001   17998 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:30.186212   17998 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640] misses:0}
	I1025 21:30:30.186227   17998 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:30.186234   17998 network_create.go:115] attempt to create docker network calico-205231 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:30:30.186303   17998 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-205231 calico-205231
	W1025 21:30:30.247099   17998 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-205231 calico-205231 returned with exit code 1
	W1025 21:30:30.247134   17998 network_create.go:107] failed to create docker network calico-205231 192.168.58.0/24, will retry: subnet is taken
	I1025 21:30:30.247405   17998 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640] misses:1}
	I1025 21:30:30.247421   17998 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:30.247627   17998 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640 192.168.67.0:0xc000c241d8] misses:1}
	I1025 21:30:30.247639   17998 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:30.247647   17998 network_create.go:115] attempt to create docker network calico-205231 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:30:30.247709   17998 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-205231 calico-205231
	I1025 21:30:30.336871   17998 network_create.go:99] docker network calico-205231 192.168.67.0/24 created
	I1025 21:30:30.336950   17998 kic.go:106] calculated static IP "192.168.67.2" for the "calico-205231" container
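	The three create attempts above show the subnet walk: starting at 192.168.49.0/24 and advancing the third octet by 9 (49, 58, 67, and later 76 in this log) whenever docker network create answers "subnet is taken", until a free /24 is found and the node's static IP is derived as .2 in that block. A simplified Go sketch of the candidate walk under that observed pattern; the real code also holds the time-limited reservations visible in the "reserving subnet ... for 1m0s" lines.
	
	package main
	
	import "fmt"
	
	// candidateSubnets reproduces the walk visible in the log: /24 blocks under
	// 192.168.0.0, starting at .49.0 and stepping the third octet by 9
	// (49, 58, 67, 76, ...) until the octet would overflow.
	func candidateSubnets() []string {
		var subnets []string
		for octet := 49; octet <= 254; octet += 9 {
			subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", octet))
		}
		return subnets
	}
	
	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // reserved by another cluster in this run
			"192.168.58.0/24": true,
		}
		for _, s := range candidateSubnets() {
			if !taken[s] {
				fmt.Println("would create network on", s) // 192.168.67.0/24 here
				break
			}
		}
	}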
	I1025 21:30:30.337030   17998 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:30:30.398992   17998 cli_runner.go:164] Run: docker volume create calico-205231 --label name.minikube.sigs.k8s.io=calico-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:30:30.459851   17998 oci.go:103] Successfully created a docker volume calico-205231
	I1025 21:30:30.459956   17998 cli_runner.go:164] Run: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:30:30.680283   17998 cli_runner.go:211] docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:30:30.680322   17998 client.go:171] LocalClient.Create took 743.684067ms
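	The docker run that exits 125 above is a throwaway "preload sidecar": its entrypoint is /usr/bin/test -d /var/lib, run with the named volume mounted at /var, so its only job is to materialize the volume and confirm it looks like a node filesystem root. Exit status 125 is docker's own error code, meaning the container never started at all rather than the test predicate failing. A Go sketch of that step, assuming docker is on PATH; the helper name and the shortened image tag are illustrative.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// primeVolume mirrors the preload sidecar from the log: a --rm container
	// whose entrypoint is /usr/bin/test, mounting the named volume at /var and
	// asserting /var/lib is a directory. Exit 125 means `docker run` itself
	// failed daemon-side, distinct from /usr/bin/test returning nonzero.
	func primeVolume(volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/test",
			"-v", volume+":/var",
			image, "-d", "/var/lib")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("priming %s: %w: %s", volume, err, out)
		}
		return nil
	}
	
	func main() {
		// The real run pins the kicbase image by sha256 digest; shortened here.
		err := primeVolume("calico-205231", "gcr.io/k8s-minikube/kicbase-builds:v0.0.35")
		fmt.Println(err)
	}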
	I1025 21:30:32.682766   17998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:30:32.682925   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:32.746844   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:32.746934   17998 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
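	The df probe needs an SSH session into the node, and the SSH endpoint comes from the template shown above: index .NetworkSettings.Ports for "22/tcp" and take the first binding's HostPort. Since no container was ever created, every lookup exits 1 with "No such container", which is what the retry loop below keeps hitting. A Go sketch of that lookup, with an illustrative helper name.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// sshHostPort runs the same template as the log: index the published port
	// map for 22/tcp and take the first binding's HostPort. For a container
	// that does not exist, docker exits 1 with "No such container".
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		cmd := exec.Command("docker", "container", "inspect", "-f", format, container)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w: %s", container, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		port, err := sshHostPort("calico-205231")
		fmt.Println(port, err)
	}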
	I1025 21:30:33.024793   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:33.090412   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:33.090503   17998 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:33.633004   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:33.699385   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:33.699464   17998 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:34.355399   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:34.420607   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	W1025 21:30:34.420694   17998 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	
	W1025 21:30:34.420709   17998 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:34.420761   17998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:30:34.420805   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:34.481591   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:34.481666   17998 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:34.715150   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:34.781246   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:34.781330   17998 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:35.228319   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:35.293591   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:35.293672   17998 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:35.614229   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:35.679141   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:30:35.679231   17998 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:36.234870   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:30:36.299282   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	W1025 21:30:36.299377   17998 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	
	W1025 21:30:36.299393   17998 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:36.299400   17998 start.go:128] duration metric: createHost completed in 6.406610838s
	I1025 21:30:36.299410   17998 start.go:83] releasing machines lock for "calico-205231", held for 6.406704985s
	W1025 21:30:36.299424   17998 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
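	This stderr line is the root cause of the whole failure: dockerd accepted the CLI request but could not dial its own containerd at /var/run/desktop-containerd/containerd.sock, which on Docker Desktop typically means the backend VM was restarting or wedged. Every later operation fails with "No such container" simply because nothing was ever created. A pre-flight probe along these lines could distinguish that condition from a test bug; this is an illustrative check, not minikube's code.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// daemonHealthy checks that the daemon's API socket answers at all.
	// `docker version` only proves the API is up; a trivial `docker run` would
	// additionally prove containerd is reachable, which is what failed here.
	func daemonHealthy(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Run()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("docker daemon not healthy after %s: %w", timeout, err)
			}
			time.Sleep(2 * time.Second)
		}
	}
	
	func main() { fmt.Println(daemonHealthy(30 * time.Second)) }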
	I1025 21:30:36.299789   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:36.361145   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:36.361189   17998 delete.go:82] Unable to get host status for calico-205231, assuming it has already been deleted: state: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	W1025 21:30:36.361331   17998 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:30:36.361342   17998 start.go:618] Will try again in 5 seconds ...
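	The five-second pause announced above is the outer retry: the first createHost attempt is abandoned, the machines lock is released, and a second attempt begins by "fixing" the host, i.e. tearing down any half-created state before recreating it. The shape of that loop, simplified in Go to the two attempts seen in this log; the function names are illustrative.
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// startWithRetry mirrors the outer loop in the log: one full create attempt,
	// and on failure a fixed 5s pause before a second attempt that first tears
	// down whatever half-created state the first attempt left behind.
	func startWithRetry(create func() error, demolish func()) error {
		err := create()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		demolish() // rm -f -v container, docker network rm; errors are tolerated
		return create()
	}
	
	func main() {
		attempts := 0
		err := startWithRetry(func() error {
			attempts++
			return errors.New("exit status 125") // daemon still down in this run
		}, func() { fmt.Println("DEMOLISHING ...") })
		fmt.Println(attempts, err)
	}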
	I1025 21:30:41.363708   17998 start.go:364] acquiring machines lock for calico-205231: {Name:mk7076ab2a8012cf6a913e687e3fcf4a772b25f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:30:41.363872   17998 start.go:368] acquired machines lock for "calico-205231" in 130.125µs
	I1025 21:30:41.363900   17998 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:30:41.363914   17998 fix.go:55] fixHost starting: 
	I1025 21:30:41.364293   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:41.427824   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:41.427872   17998 fix.go:103] recreateIfNeeded on calico-205231: state= err=unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:41.427892   17998 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:30:41.449649   17998 out.go:177] * docker "calico-205231" container is missing, will recreate.
	I1025 21:30:41.471560   17998 delete.go:124] DEMOLISHING calico-205231 ...
	I1025 21:30:41.471785   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:41.533336   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:30:41.533375   17998 stop.go:75] unable to get state: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:41.533394   17998 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:41.533710   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:41.593837   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:41.593875   17998 delete.go:82] Unable to get host status for calico-205231, assuming it has already been deleted: state: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:41.593952   17998 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-205231
	W1025 21:30:41.654828   17998 cli_runner.go:211] docker container inspect -f {{.Id}} calico-205231 returned with exit code 1
	I1025 21:30:41.654854   17998 kic.go:356] could not find the container calico-205231 to remove it. will try anyways
	I1025 21:30:41.654941   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:41.715000   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:30:41.715038   17998 oci.go:84] error getting container status, will try to delete anyways: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:41.715101   17998 cli_runner.go:164] Run: docker exec --privileged -t calico-205231 /bin/bash -c "sudo init 0"
	W1025 21:30:41.774898   17998 cli_runner.go:211] docker exec --privileged -t calico-205231 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:30:41.774929   17998 oci.go:646] error shutdown calico-205231: docker exec --privileged -t calico-205231 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:42.775231   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:42.957228   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:42.957291   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:42.957305   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:42.957348   17998 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:43.357967   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:43.472786   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:43.472827   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:43.472836   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:43.472853   17998 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:44.234906   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:44.301977   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:44.302013   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:44.302020   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:44.302039   17998 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:45.782295   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:45.845088   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:45.845133   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:45.845141   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:45.845162   17998 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:47.051411   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:47.117707   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:47.117747   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:47.117754   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:47.117773   17998 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:49.349256   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:49.415151   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:49.415190   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:49.415199   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:49.415217   17998 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:52.523361   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:52.587760   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:52.587800   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:52.587807   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:52.587825   17998 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:58.108417   17998 cli_runner.go:164] Run: docker container inspect calico-205231 --format={{.State.Status}}
	W1025 21:30:58.176136   17998 cli_runner.go:211] docker container inspect calico-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:30:58.176190   17998 oci.go:658] temporary error verifying shutdown: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:30:58.176201   17998 oci.go:660] temporary error: container calico-205231 status is  but expect it to be exited
	I1025 21:30:58.176224   17998 oci.go:88] couldn't shut down calico-205231 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-205231": docker container inspect calico-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	 
	I1025 21:30:58.176287   17998 cli_runner.go:164] Run: docker rm -f -v calico-205231
	I1025 21:30:58.303616   17998 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-205231
	W1025 21:30:58.364284   17998 cli_runner.go:211] docker container inspect -f {{.Id}} calico-205231 returned with exit code 1
	I1025 21:30:58.364408   17998 cli_runner.go:164] Run: docker network inspect calico-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:58.424653   17998 cli_runner.go:164] Run: docker network rm calico-205231
	W1025 21:30:58.536980   17998 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:30:58.537000   17998 fix.go:115] Sleeping 1 second for extra luck!
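	The DEMOLISHING phase that just finished is a best-effort teardown: try a graceful sudo init 0 inside the container, poll with growing delays for an exited state, then give up and force-remove both the container (with its anonymous volumes) and the per-cluster network, treating "No such container/network" as already done, which is why the log reads "delete failed (probably ok)". A condensed Go sketch of the force-removal tail of that sequence, with an illustrative helper name.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// demolish is a best-effort teardown matching the log's DEMOLISHING phase:
	// force-remove the container (with its anonymous volumes) and the cluster
	// network, treating "already gone" as success.
	func demolish(name string) {
		for _, args := range [][]string{
			{"rm", "-f", "-v", name}, // container + anonymous volumes
			{"network", "rm", name},  // per-cluster bridge network
		} {
			if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
				fmt.Printf("delete failed (probably ok): %s: %s\n", err, out)
			}
		}
	}
	
	func main() { demolish("calico-205231") }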
	I1025 21:30:59.537850   17998 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:30:59.560215   17998 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:30:59.560348   17998 start.go:159] libmachine.API.Create for "calico-205231" (driver="docker")
	I1025 21:30:59.560377   17998 client.go:168] LocalClient.Create starting
	I1025 21:30:59.560584   17998 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:30:59.560670   17998 main.go:134] libmachine: Decoding PEM data...
	I1025 21:30:59.560690   17998 main.go:134] libmachine: Parsing certificate...
	I1025 21:30:59.560784   17998 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:30:59.560836   17998 main.go:134] libmachine: Decoding PEM data...
	I1025 21:30:59.560860   17998 main.go:134] libmachine: Parsing certificate...
	I1025 21:30:59.582296   17998 cli_runner.go:164] Run: docker network inspect calico-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:30:59.644799   17998 cli_runner.go:211] docker network inspect calico-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:30:59.644865   17998 network_create.go:272] running [docker network inspect calico-205231] to gather additional debugging logs...
	I1025 21:30:59.644884   17998 cli_runner.go:164] Run: docker network inspect calico-205231
	W1025 21:30:59.706629   17998 cli_runner.go:211] docker network inspect calico-205231 returned with exit code 1
	I1025 21:30:59.706665   17998 network_create.go:275] error running [docker network inspect calico-205231]: docker network inspect calico-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-205231
	I1025 21:30:59.706685   17998 network_create.go:277] output of [docker network inspect calico-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-205231
	
	** /stderr **
	I1025 21:30:59.706776   17998 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:59.768258   17998 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640 192.168.67.0:0xc000c241d8] misses:1}
	I1025 21:30:59.768285   17998 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:59.768517   17998 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640 192.168.67.0:0xc000c241d8] misses:2}
	I1025 21:30:59.768527   17998 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:59.768721   17998 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640 192.168.67.0:0xc000c241d8] amended:false}} dirty:map[] misses:0}
	I1025 21:30:59.768730   17998 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:59.768929   17998 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640 192.168.67.0:0xc000c241d8] amended:true}} dirty:map[192.168.49.0:0xc00011c7b8 192.168.58.0:0xc00089e640 192.168.67.0:0xc000c241d8 192.168.76.0:0xc00011c1a0] misses:0}
	I1025 21:30:59.768946   17998 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:30:59.768952   17998 network_create.go:115] attempt to create docker network calico-205231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:30:59.769022   17998 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-205231 calico-205231
	I1025 21:30:59.860387   17998 network_create.go:99] docker network calico-205231 192.168.76.0/24 created
	I1025 21:30:59.860423   17998 kic.go:106] calculated static IP "192.168.76.2" for the "calico-205231" container
	I1025 21:30:59.860556   17998 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:30:59.922393   17998 cli_runner.go:164] Run: docker volume create calico-205231 --label name.minikube.sigs.k8s.io=calico-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:30:59.983031   17998 oci.go:103] Successfully created a docker volume calico-205231
	I1025 21:30:59.983163   17998 cli_runner.go:164] Run: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:31:00.123900   17998 cli_runner.go:211] docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:31:00.123951   17998 client.go:171] LocalClient.Create took 563.54494ms
	I1025 21:31:02.124403   17998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:02.124534   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:02.186676   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:02.186761   17998 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:02.387384   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:02.452129   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:02.452221   17998 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:02.896770   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:02.958873   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:02.958964   17998 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:03.364257   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:03.427417   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:03.427498   17998 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:04.023207   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:04.090201   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	W1025 21:31:04.090290   17998 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	
	W1025 21:31:04.090307   17998 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
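The two df probes around here are disk-health checks minikube runs over SSH once a node is supposedly up: the first reads the used percentage of /var, the second the free space in GiB. A sketch of what each pipeline extracts (awk programs copied from the log; the runner signature is mine, standing in for the SSH session that can never connect in this run):

    package main

    import "fmt"

    // diskChecks shows the two probes from the log; run stands in for
    // minikube's ssh_runner.
    func diskChecks(run func(cmd string) (string, error)) {
    	// Used percentage of /var, e.g. "17%".
    	pct, err := run(`df -h /var | awk 'NR==2{print $5}'`)
    	fmt.Println(pct, err)
    	// Free GiB on /var, e.g. "53G".
    	free, err := run(`df -BG /var | awk 'NR==2{print $4}'`)
    	fmt.Println(free, err)
    }

    func main() {
    	diskChecks(func(string) (string, error) {
    		return "", fmt.Errorf("no ssh session: container missing")
    	})
    }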
	I1025 21:31:04.090368   17998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:04.090412   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:04.151225   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:04.151302   17998 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:04.421320   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:04.483128   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:04.483218   17998 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:04.996389   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:05.061475   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:05.061558   17998 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:05.509572   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:05.573214   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	W1025 21:31:05.573329   17998 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	
	W1025 21:31:05.573346   17998 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:05.573357   17998 start.go:128] duration metric: createHost completed in 6.035447936s
	I1025 21:31:05.573420   17998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:05.573466   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:05.635490   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:05.635561   17998 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:05.949944   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:06.016613   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:06.016691   17998 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:06.284150   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:06.348050   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:06.348170   17998 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:07.116392   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:07.177923   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	W1025 21:31:07.178006   17998 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	
	W1025 21:31:07.178021   17998 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:07.178083   17998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:07.178144   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:07.239338   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:07.239414   17998 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:07.497757   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:07.561416   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:07.561505   17998 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:07.760695   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:07.824115   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	I1025 21:31:07.824197   17998 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:08.196826   17998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231
	W1025 21:31:08.306285   17998 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231 returned with exit code 1
	W1025 21:31:08.306404   17998 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	
	W1025 21:31:08.306431   17998 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-205231
	I1025 21:31:08.306446   17998 fix.go:57] fixHost completed within 26.942301927s
	I1025 21:31:08.306460   17998 start.go:83] releasing machines lock for "calico-205231", held for 26.942345998s
	W1025 21:31:08.306656   17998 out.go:239] * Failed to start docker container. Running "minikube delete -p calico-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p calico-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
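This recurring stderr line is the actual root cause of everything above: the Docker Desktop daemon is reachable, but its embedded containerd socket refuses connections, so no container can be started. A hedged Go sketch of that liveness check (the socket path is copied from the log and exists only inside the Docker Desktop VM, not on the macOS host):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the socket the daemon reported it could not reach.
    	sock := "/var/run/desktop-containerd/containerd.sock"
    	conn, err := net.DialTimeout("unix", sock, time.Second)
    	if err != nil {
    		fmt.Println("containerd down:", err) // "connect: connection refused" in this run
    		return
    	}
    	conn.Close()
    	fmt.Println("containerd up")
    }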
	
	I1025 21:31:08.350232   17998 out.go:177] 
	W1025 21:31:08.388277   17998 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for calico-205231 container: docker run --rm --name calico-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-205231 --entrypoint /usr/bin/test -v calico-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:31:08.388313   17998 out.go:239] * 
	* 
	W1025 21:31:08.389799   17998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:31:08.476113   17998 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (39.47s)
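The harness records exit status 80 because minikube maps error reasons to exit-code families, and GUEST_PROVISION falls in the guest-error family. A hedged sketch of that mapping (the constant value matches what this log shows; the function and any other values are assumptions — the real table lives in minikube's reason package):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // exGuestError mirrors the exit status the test saw; treat the exact
    // mapping as an assumption sketched from this log alone.
    const exGuestError = 80

    func exitCodeFor(reasonID string) int {
    	if strings.HasPrefix(reasonID, "GUEST_") {
    		return exGuestError
    	}
    	return 1 // generic failure; placeholder for the other families
    }

    func main() {
    	fmt.Println(exitCodeFor("GUEST_PROVISION")) // 80, as in "failed start: exit status 80"
    }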

TestNetworkPlugins/group/false/Start (39.44s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E1025 21:31:35.554101    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-205231 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : exit status 80 (39.432629385s)

-- stdout --
	* [false-205231] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node false-205231 in cluster false-205231
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-205231" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:31:09.586897   18324 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:31:09.587057   18324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:31:09.587062   18324 out.go:309] Setting ErrFile to fd 2...
	I1025 21:31:09.587066   18324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:31:09.587190   18324 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:31:09.587729   18324 out.go:303] Setting JSON to false
	I1025 21:31:09.602422   18324 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5438,"bootTime":1666753231,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:31:09.602514   18324 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:31:09.623905   18324 out.go:177] * [false-205231] minikube v1.27.1 on Darwin 12.6
	I1025 21:31:09.666124   18324 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:31:09.666131   18324 notify.go:220] Checking for updates...
	I1025 21:31:09.709701   18324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:31:09.730714   18324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:31:09.752005   18324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:31:09.773943   18324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:31:09.796714   18324 config.go:180] Loaded profile config "cert-expiration-212703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:31:09.796853   18324 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:31:09.796939   18324 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:31:09.864900   18324 docker.go:137] docker version: linux-20.10.17
	I1025 21:31:09.865064   18324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:31:09.993364   18324 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:31:09.923743637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:31:10.036914   18324 out.go:177] * Using the docker driver based on user configuration
	I1025 21:31:10.058004   18324 start.go:282] selected driver: docker
	I1025 21:31:10.058031   18324 start.go:808] validating driver "docker" against <nil>
	I1025 21:31:10.058056   18324 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:31:10.061438   18324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:31:10.188289   18324 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:31:10.119461363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:31:10.188382   18324 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:31:10.188519   18324 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:31:10.211321   18324 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:31:10.233143   18324 cni.go:95] Creating CNI manager for "false"
	I1025 21:31:10.233176   18324 start_flags.go:317] config:
	{Name:false-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
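The dump above is minikube's full cluster config serialized in one line. A trimmed Go sketch of the fields this particular test exercises (field names copied from the dump; the struct shape is an assumption, the real definition lives in minikube's config package):

    package main

    import "fmt"

    // clusterConfig trims the dump above to the fields this run uses.
    type clusterConfig struct {
    	Name              string // "false-205231"
    	Driver            string // "docker"
    	Memory            int    // 2048, from --memory=2048
    	CPUs              int    // 2
    	KicBaseImage      string // gcr.io/k8s-minikube/kicbase-builds:v0.0.35-…
    	CNI               string // "false": --cni=false disables CNI entirely
    	KubernetesVersion string // "v1.25.3"
    	ServiceCIDR       string // "10.96.0.0/12"
    }

    func main() {
    	fmt.Printf("%+v\n", clusterConfig{Name: "false-205231", Driver: "docker",
    		Memory: 2048, CPUs: 2, CNI: "false", KubernetesVersion: "v1.25.3"})
    }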
	I1025 21:31:10.255166   18324 out.go:177] * Starting control plane node false-205231 in cluster false-205231
	I1025 21:31:10.298811   18324 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:31:10.320139   18324 out.go:177] * Pulling base image ...
	I1025 21:31:10.362178   18324 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:31:10.362203   18324 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:31:10.362251   18324 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:31:10.362271   18324 cache.go:57] Caching tarball of preloaded images
	I1025 21:31:10.362499   18324 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:31:10.362515   18324 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:31:10.363487   18324 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/false-205231/config.json ...
	I1025 21:31:10.363604   18324 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/false-205231/config.json: {Name:mkc14bd5ca46a5f4a092a1cbdcc1532725c92cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:31:10.426725   18324 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:31:10.426743   18324 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:31:10.426753   18324 cache.go:208] Successfully downloaded all kic artifacts
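The cache lines above are two independent checks that both hit: the preload tarball for v1.25.3 already sits on disk, and the kicbase image is already loaded in the daemon, so nothing is downloaded. A sketch of both checks (path and image reference copied from the log; helper names are mine):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // haveKicBase asks the daemon whether the image is already loaded;
    // `docker image inspect` exits non-zero when it is absent.
    func haveKicBase(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    // havePreload checks for the on-disk preload tarball the log found.
    func havePreload(path string) bool {
    	_, err := os.Stat(path)
    	return err == nil
    }

    func main() {
    	fmt.Println(haveKicBase("gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094"))
    	fmt.Println(havePreload("/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/" +
    		"preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4"))
    }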
	I1025 21:31:10.426809   18324 start.go:364] acquiring machines lock for false-205231: {Name:mkbc91d52b6aedbfa50d933903b1b6efbfe2fa8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:31:10.426954   18324 start.go:368] acquired machines lock for "false-205231" in 133.201µs
	I1025 21:31:10.426993   18324 start.go:93] Provisioning new machine with config: &{Name:false-205231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-205231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:31:10.427068   18324 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:31:10.470566   18324 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:31:10.471071   18324 start.go:159] libmachine.API.Create for "false-205231" (driver="docker")
	I1025 21:31:10.471111   18324 client.go:168] LocalClient.Create starting
	I1025 21:31:10.471260   18324 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:31:10.471329   18324 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:10.471354   18324 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:10.471450   18324 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:31:10.471494   18324 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:10.471516   18324 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:10.472325   18324 cli_runner.go:164] Run: docker network inspect false-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:31:10.534403   18324 cli_runner.go:211] docker network inspect false-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:31:10.534490   18324 network_create.go:272] running [docker network inspect false-205231] to gather additional debugging logs...
	I1025 21:31:10.534513   18324 cli_runner.go:164] Run: docker network inspect false-205231
	W1025 21:31:10.595858   18324 cli_runner.go:211] docker network inspect false-205231 returned with exit code 1
	I1025 21:31:10.595882   18324 network_create.go:275] error running [docker network inspect false-205231]: docker network inspect false-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-205231
	I1025 21:31:10.595895   18324 network_create.go:277] output of [docker network inspect false-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-205231
	
	** /stderr **
	I1025 21:31:10.595984   18324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:31:10.657258   18324 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000d9a558] misses:0}
	I1025 21:31:10.657345   18324 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.657372   18324 network_create.go:115] attempt to create docker network false-205231 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:31:10.657755   18324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231
	W1025 21:31:10.718104   18324 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231 returned with exit code 1
	W1025 21:31:10.718140   18324 network_create.go:107] failed to create docker network false-205231 192.168.49.0/24, will retry: subnet is taken
	I1025 21:31:10.718388   18324 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:false}} dirty:map[] misses:0}
	I1025 21:31:10.718403   18324 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.718600   18324 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028] misses:0}
	I1025 21:31:10.718613   18324 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.718627   18324 network_create.go:115] attempt to create docker network false-205231 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:31:10.718694   18324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231
	W1025 21:31:10.779046   18324 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231 returned with exit code 1
	W1025 21:31:10.779087   18324 network_create.go:107] failed to create docker network false-205231 192.168.58.0/24, will retry: subnet is taken
	I1025 21:31:10.779331   18324 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028] misses:1}
	I1025 21:31:10.779346   18324 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.779553   18324 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8] misses:1}
	I1025 21:31:10.779565   18324 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.779575   18324 network_create.go:115] attempt to create docker network false-205231 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:31:10.779645   18324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231
	W1025 21:31:10.840519   18324 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231 returned with exit code 1
	W1025 21:31:10.840561   18324 network_create.go:107] failed to create docker network false-205231 192.168.67.0/24, will retry: subnet is taken
	I1025 21:31:10.840840   18324 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8] misses:2}
	I1025 21:31:10.840856   18324 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.841057   18324 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] misses:2}
	I1025 21:31:10.841070   18324 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:10.841076   18324 network_create.go:115] attempt to create docker network false-205231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:31:10.841142   18324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231
	I1025 21:31:10.931823   18324 network_create.go:99] docker network false-205231 192.168.76.0/24 created
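The three "subnet is taken" retries above show the subnet walk: minikube starts at 192.168.49.0/24 and advances the third octet by 9 (49, 58, 67, 76) until `docker network create` succeeds. A sketch of that walk (the step size is inferred from the four attempts in this log):

    package main

    import "fmt"

    func main() {
    	// Candidate subnets in the order this log tried them.
    	for octet := 49; octet <= 76; octet += 9 {
    		fmt.Printf("192.168.%d.0/24\n", octet)
    	}
    	// 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, 192.168.76.0/24 —
    	// the last one is free here, so false-205231 lands on 192.168.76.0/24.
    }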
	I1025 21:31:10.931856   18324 kic.go:106] calculated static IP "192.168.76.2" for the "false-205231" container
	I1025 21:31:10.931978   18324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:31:10.995861   18324 cli_runner.go:164] Run: docker volume create false-205231 --label name.minikube.sigs.k8s.io=false-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:31:11.057408   18324 oci.go:103] Successfully created a docker volume false-205231
	I1025 21:31:11.057511   18324 cli_runner.go:164] Run: docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:31:11.275566   18324 cli_runner.go:211] docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:31:11.275638   18324 client.go:171] LocalClient.Create took 804.512492ms
	I1025 21:31:13.277616   18324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:13.277752   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:13.341065   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:13.341162   18324 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:13.618405   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:13.684790   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:13.684864   18324 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:14.227538   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:14.292359   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:14.292447   18324 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:14.947763   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:15.008776   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	W1025 21:31:15.008875   18324 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	
	W1025 21:31:15.008897   18324 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:15.008974   18324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:15.009018   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:15.069469   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:15.069555   18324 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:15.303002   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:15.368606   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:15.368684   18324 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:15.816208   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:15.883158   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:15.883242   18324 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:16.203821   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:16.270640   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:16.270731   18324 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:16.827101   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:16.890870   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	W1025 21:31:16.890972   18324 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	
	W1025 21:31:16.890990   18324 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:16.890999   18324 start.go:128] duration metric: createHost completed in 6.46389628s
	I1025 21:31:16.891007   18324 start.go:83] releasing machines lock for "false-205231", held for 6.464015683s
	W1025 21:31:16.891021   18324 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for false-205231 container: docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:31:16.891421   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:16.952366   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:16.952406   18324 delete.go:82] Unable to get host status for false-205231, assuming it has already been deleted: state: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	W1025 21:31:16.952562   18324 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for false-205231 container: docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:31:16.952576   18324 start.go:618] Will try again in 5 seconds ...
	I1025 21:31:21.954876   18324 start.go:364] acquiring machines lock for false-205231: {Name:mkbc91d52b6aedbfa50d933903b1b6efbfe2fa8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:31:21.955040   18324 start.go:368] acquired machines lock for "false-205231" in 121.115µs
	I1025 21:31:21.955081   18324 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:31:21.955096   18324 fix.go:55] fixHost starting: 
	I1025 21:31:21.955489   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:22.020624   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:22.020669   18324 fix.go:103] recreateIfNeeded on false-205231: state= err=unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:22.020684   18324 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:31:22.064184   18324 out.go:177] * docker "false-205231" container is missing, will recreate.
	I1025 21:31:22.086199   18324 delete.go:124] DEMOLISHING false-205231 ...
	I1025 21:31:22.086440   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:22.148163   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:31:22.148202   18324 stop.go:75] unable to get state: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:22.148214   18324 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:22.148558   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:22.209255   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:22.209298   18324 delete.go:82] Unable to get host status for false-205231, assuming it has already been deleted: state: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:22.209410   18324 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-205231
	W1025 21:31:22.270148   18324 cli_runner.go:211] docker container inspect -f {{.Id}} false-205231 returned with exit code 1
	I1025 21:31:22.270175   18324 kic.go:356] could not find the container false-205231 to remove it. will try anyways
	I1025 21:31:22.270279   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:22.330616   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	W1025 21:31:22.330661   18324 oci.go:84] error getting container status, will try to delete anyways: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:22.330755   18324 cli_runner.go:164] Run: docker exec --privileged -t false-205231 /bin/bash -c "sudo init 0"
	W1025 21:31:22.390963   18324 cli_runner.go:211] docker exec --privileged -t false-205231 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:31:22.390998   18324 oci.go:646] error shutdown false-205231: docker exec --privileged -t false-205231 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:23.391420   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:23.455609   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:23.455651   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:23.455659   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:23.455677   18324 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:23.858501   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:23.921429   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:23.921469   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:23.921479   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:23.921498   18324 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:24.685294   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:24.748891   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:24.748930   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:24.748945   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:24.748988   18324 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:26.229216   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:26.294435   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:26.294473   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:26.294481   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:26.294499   18324 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:27.500723   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:27.564794   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:27.564851   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:27.564858   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:27.564876   18324 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:29.796470   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:29.861034   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:29.861075   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:29.861083   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:29.861102   18324 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:32.969531   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:33.033102   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:33.033152   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:33.033159   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:33.033184   18324 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:38.552441   18324 cli_runner.go:164] Run: docker container inspect false-205231 --format={{.State.Status}}
	W1025 21:31:38.618833   18324 cli_runner.go:211] docker container inspect false-205231 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:38.618885   18324 oci.go:658] temporary error verifying shutdown: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:38.618898   18324 oci.go:660] temporary error: container false-205231 status is  but expect it to be exited
	I1025 21:31:38.618940   18324 oci.go:88] couldn't shut down false-205231 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-205231": docker container inspect false-205231 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	 
	I1025 21:31:38.619027   18324 cli_runner.go:164] Run: docker rm -f -v false-205231
	I1025 21:31:38.683443   18324 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-205231
	W1025 21:31:38.744305   18324 cli_runner.go:211] docker container inspect -f {{.Id}} false-205231 returned with exit code 1
	I1025 21:31:38.744440   18324 cli_runner.go:164] Run: docker network inspect false-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:31:38.805663   18324 cli_runner.go:164] Run: docker network rm false-205231
	W1025 21:31:38.912458   18324 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:31:38.912477   18324 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:31:39.914563   18324 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:31:39.936568   18324 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:31:39.936673   18324 start.go:159] libmachine.API.Create for "false-205231" (driver="docker")
	I1025 21:31:39.936724   18324 client.go:168] LocalClient.Create starting
	I1025 21:31:39.936828   18324 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:31:39.936883   18324 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:39.936895   18324 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:39.936943   18324 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:31:39.936965   18324 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:39.936977   18324 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:39.957437   18324 cli_runner.go:164] Run: docker network inspect false-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:31:40.019022   18324 cli_runner.go:211] docker network inspect false-205231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:31:40.019100   18324 network_create.go:272] running [docker network inspect false-205231] to gather additional debugging logs...
	I1025 21:31:40.019118   18324 cli_runner.go:164] Run: docker network inspect false-205231
	W1025 21:31:40.083873   18324 cli_runner.go:211] docker network inspect false-205231 returned with exit code 1
	I1025 21:31:40.083896   18324 network_create.go:275] error running [docker network inspect false-205231]: docker network inspect false-205231: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-205231
	I1025 21:31:40.083912   18324 network_create.go:277] output of [docker network inspect false-205231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-205231
	
	** /stderr **
	I1025 21:31:40.083982   18324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:31:40.146055   18324 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] misses:2}
	I1025 21:31:40.146086   18324 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.146317   18324 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] misses:3}
	I1025 21:31:40.146328   18324 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.146522   18324 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] amended:false}} dirty:map[] misses:0}
	I1025 21:31:40.146530   18324 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.146711   18324 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] amended:false}} dirty:map[] misses:0}
	I1025 21:31:40.146718   18324 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.146949   18324 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70 192.168.85.0:0xc0004a5128] misses:0}
	I1025 21:31:40.146975   18324 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.146987   18324 network_create.go:115] attempt to create docker network false-205231 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:31:40.147058   18324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231
	W1025 21:31:40.209939   18324 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231 returned with exit code 1
	W1025 21:31:40.209972   18324 network_create.go:107] failed to create docker network false-205231 192.168.85.0/24, will retry: subnet is taken
	I1025 21:31:40.210253   18324 network.go:286] skipping subnet 192.168.85.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70 192.168.85.0:0xc0004a5128] misses:1}
	I1025 21:31:40.210276   18324 network.go:244] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.210469   18324 network.go:295] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70] amended:true}} dirty:map[192.168.49.0:0xc000d9a558 192.168.58.0:0xc0005a4028 192.168.67.0:0xc0009960b8 192.168.76.0:0xc0004a4e70 192.168.85.0:0xc0004a5128 192.168.94.0:0xc000d9a2f0] misses:1}
	I1025 21:31:40.210503   18324 network.go:241] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:40.210511   18324 network_create.go:115] attempt to create docker network false-205231 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 21:31:40.210569   18324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-205231 false-205231
	I1025 21:31:40.302211   18324 network_create.go:99] docker network false-205231 192.168.94.0/24 created
	I1025 21:31:40.302241   18324 kic.go:106] calculated static IP "192.168.94.2" for the "false-205231" container
	I1025 21:31:40.302329   18324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:31:40.369247   18324 cli_runner.go:164] Run: docker volume create false-205231 --label name.minikube.sigs.k8s.io=false-205231 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:31:40.430033   18324 oci.go:103] Successfully created a docker volume false-205231
	I1025 21:31:40.430148   18324 cli_runner.go:164] Run: docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:31:40.571823   18324 cli_runner.go:211] docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:31:40.571869   18324 client.go:171] LocalClient.Create took 635.137903ms
	I1025 21:31:42.574159   18324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:42.574256   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:42.635786   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:42.635866   18324 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:42.836474   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:42.899296   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:42.899382   18324 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:43.341860   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:43.408130   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:43.408211   18324 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:43.814786   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:43.880317   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:43.880397   18324 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:44.473975   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:44.584281   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	W1025 21:31:44.584381   18324 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	
	W1025 21:31:44.584429   18324 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:44.584548   18324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:44.584599   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:44.645432   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:44.645530   18324 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:44.914129   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:44.978790   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:44.978883   18324 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:45.491678   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:45.591970   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:45.592102   18324 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:46.038505   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:46.123561   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	W1025 21:31:46.123666   18324 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	
	W1025 21:31:46.123682   18324 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:46.123698   18324 start.go:128] duration metric: createHost completed in 6.209096918s
	I1025 21:31:46.123754   18324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:46.123793   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:46.185966   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:46.186054   18324 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:46.499848   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:46.562551   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:46.562631   18324 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:46.828039   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:46.890131   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:46.890230   18324 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:47.658557   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:47.721779   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	W1025 21:31:47.721869   18324 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	
	W1025 21:31:47.721885   18324 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:47.721943   18324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:47.721990   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:47.782438   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:47.782512   18324 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:48.038840   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:48.105777   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:48.105852   18324 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:48.306351   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:48.372857   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	I1025 21:31:48.372943   18324 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:48.745662   18324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231
	W1025 21:31:48.811238   18324 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231 returned with exit code 1
	W1025 21:31:48.811323   18324 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	
	W1025 21:31:48.811338   18324 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-205231": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-205231: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-205231
	I1025 21:31:48.811348   18324 fix.go:57] fixHost completed within 26.856156667s
	I1025 21:31:48.811358   18324 start.go:83] releasing machines lock for "false-205231", held for 26.856209197s
	W1025 21:31:48.811519   18324 out.go:239] * Failed to start docker container. Running "minikube delete -p false-205231" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for false-205231 container: docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:31:48.854053   18324 out.go:177] 
	W1025 21:31:48.880359   18324 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for false-205231 container: docker run --rm --name false-205231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-205231 --entrypoint /usr/bin/test -v false-205231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:31:48.880397   18324 out.go:239] * 
	W1025 21:31:48.881590   18324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:31:48.946189   18324 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (39.44s)
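The whole failure above reduces to one step: the preload sidecar "docker run" exits 125 because Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refuses connections, so the false-205231 container is never created, and every later probe against it fails with "No such container". Each of those probes runs the same "docker container inspect" Go template to read the host port published for 22/tcp. Below is a minimal Go sketch of that lookup, using the template verbatim from the log; the function and file names are illustrative, not minikube's.

// portprobe.go: reproduce the SSH host-port lookup that fails throughout
// the log above. When the container does not exist, docker exits 1 and
// stderr carries "Error: No such container: ...".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor22 reads the host port published for 22/tcp on the named
// container via the same inspect template shown in the log.
func hostPortFor22(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w: %s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor22("false-205231")
	if err != nil {
		fmt.Println(err) // mirrors the repeated "exit status 1 ... No such container" lines
		return
	}
	fmt.Println("ssh host port:", port)
}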
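Before the recreate, the network.go lines above also show how a subnet is chosen: candidate private /24s start at 192.168.49.0 and step the third octet by 9 (49, 58, 67, 76, 85, 94), skipping any subnet with an unexpired one-minute reservation, and falling back to the next candidate when "docker network create" reports the subnet taken, which is why 192.168.85.0/24 is abandoned for 192.168.94.0/24. A simplified sketch of that walk, inferred from the log rather than taken from minikube's code:

// subnetpick.go: walk 192.168.x.0/24 candidates in steps of 9, skipping
// reserved subnets, as the network.go log lines above suggest.
package main

import "fmt"

func pickSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 245; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[cidr] {
			fmt.Println("skipping reserved subnet", cidr)
			continue
		}
		return cidr
	}
	return "" // no free candidate
}

func main() {
	// Reservations mirroring the log: .49, .58, .67 and .76 were already
	// reserved, and .85 was found taken on the first create attempt.
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println("using free private subnet", pickSubnet(reserved)) // 192.168.94.0/24
}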

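One more pattern worth noting from the failed run above: retry.go backs off with roughly increasing, jittered delays while verifying shutdown (about 0.4s, 0.76s, 1.5s, up to 5.5s before giving up), and the literal %v in "couldn't verify container is exited. %v:" looks like an unexpanded format verb in the retried error message rather than real output. A minimal sketch of that retry shape follows; the helper is hypothetical, not minikube's retry API.

// retrysketch.go: retry with a jittered, roughly doubling delay, printing
// a "will retry after ..." line per failure like retry.go does above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff calls fn up to attempts times and returns the last error.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryBackoff(5, 400*time.Millisecond, func() error {
		return errors.New(`unknown state "false-205231"`)
	})
}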
TestNetworkPlugins/group/bridge/Start (39.3s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p bridge-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : exit status 80 (39.284674712s)

-- stdout --
	* [bridge-205230] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node bridge-205230 in cluster bridge-205230
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-205230" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:31:45.641138   18598 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:31:45.641293   18598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:31:45.641298   18598 out.go:309] Setting ErrFile to fd 2...
	I1025 21:31:45.641304   18598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:31:45.641422   18598 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:31:45.641913   18598 out.go:303] Setting JSON to false
	I1025 21:31:45.656571   18598 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5474,"bootTime":1666753231,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:31:45.656678   18598 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:31:45.678629   18598 out.go:177] * [bridge-205230] minikube v1.27.1 on Darwin 12.6
	I1025 21:31:45.722688   18598 notify.go:220] Checking for updates...
	I1025 21:31:45.744415   18598 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:31:45.766242   18598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:31:45.787629   18598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:31:45.809592   18598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:31:45.831396   18598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:31:45.854123   18598 config.go:180] Loaded profile config "false-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:31:45.854277   18598 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:31:45.854366   18598 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:31:45.921427   18598 docker.go:137] docker version: linux-20.10.17
	I1025 21:31:45.921557   18598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:31:46.049946   18598 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:31:45.983523187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:31:46.092390   18598 out.go:177] * Using the docker driver based on user configuration
	I1025 21:31:46.113615   18598 start.go:282] selected driver: docker
	I1025 21:31:46.113643   18598 start.go:808] validating driver "docker" against <nil>
	I1025 21:31:46.113667   18598 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:31:46.117386   18598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:31:46.248169   18598 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:31:46.181734995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:31:46.248261   18598 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:31:46.248394   18598 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:31:46.270120   18598 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:31:46.291001   18598 cni.go:95] Creating CNI manager for "bridge"
	I1025 21:31:46.291033   18598 start_flags.go:312] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 21:31:46.291054   18598 start_flags.go:317] config:
	{Name:bridge-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:bridge-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:31:46.312869   18598 out.go:177] * Starting control plane node bridge-205230 in cluster bridge-205230
	I1025 21:31:46.355129   18598 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:31:46.376828   18598 out.go:177] * Pulling base image ...
	I1025 21:31:46.399057   18598 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:31:46.399072   18598 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:31:46.399126   18598 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:31:46.399144   18598 cache.go:57] Caching tarball of preloaded images
	I1025 21:31:46.399316   18598 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:31:46.399334   18598 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:31:46.400305   18598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/bridge-205230/config.json ...
	I1025 21:31:46.400422   18598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/bridge-205230/config.json: {Name:mk0e5810bc3e9b26ab939426b6bb878608320e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
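The two lines above persist the freshly generated profile to config.json while holding a named lock (Delay:500ms, Timeout:1m0s in the log). The sketch below illustrates that save-under-lock pattern using only the Go standard library; it is a minimal sketch under those assumptions, not minikube's lock.go, and the helper name is hypothetical.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// saveConfigLocked marshals cfg to path, guarding the write with a
// sidecar ".lock" file. It retries every delay until timeout elapses,
// mirroring the Delay:500ms Timeout:1m0s settings shown above.
func saveConfigLocked(path string, cfg any, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: only one writer can win.
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if !errors.Is(err, fs.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for lock %s", lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}
```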
	I1025 21:31:46.463040   18598 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:31:46.463059   18598 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:31:46.463069   18598 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:31:46.463134   18598 start.go:364] acquiring machines lock for bridge-205230: {Name:mk2f0677100959a898cd1024a8b66bc930ad4386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:31:46.463283   18598 start.go:368] acquired machines lock for "bridge-205230" in 137.316µs
	I1025 21:31:46.463308   18598 start.go:93] Provisioning new machine with config: &{Name:bridge-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:bridge-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:31:46.463390   18598 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:31:46.506657   18598 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:31:46.506855   18598 start.go:159] libmachine.API.Create for "bridge-205230" (driver="docker")
	I1025 21:31:46.506876   18598 client.go:168] LocalClient.Create starting
	I1025 21:31:46.506934   18598 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:31:46.506967   18598 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:46.506980   18598 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:46.507034   18598 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:31:46.507056   18598 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:46.507071   18598 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:46.507503   18598 cli_runner.go:164] Run: docker network inspect bridge-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:31:46.569002   18598 cli_runner.go:211] docker network inspect bridge-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:31:46.569091   18598 network_create.go:272] running [docker network inspect bridge-205230] to gather additional debugging logs...
	I1025 21:31:46.569104   18598 cli_runner.go:164] Run: docker network inspect bridge-205230
	W1025 21:31:46.630171   18598 cli_runner.go:211] docker network inspect bridge-205230 returned with exit code 1
	I1025 21:31:46.630194   18598 network_create.go:275] error running [docker network inspect bridge-205230]: docker network inspect bridge-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-205230
	I1025 21:31:46.630216   18598 network_create.go:277] output of [docker network inspect bridge-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-205230
	
	** /stderr **
	I1025 21:31:46.630296   18598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:31:46.691390   18598 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8] misses:0}
	I1025 21:31:46.691426   18598 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:46.691440   18598 network_create.go:115] attempt to create docker network bridge-205230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:31:46.691513   18598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230
	W1025 21:31:46.751751   18598 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230 returned with exit code 1
	W1025 21:31:46.751784   18598 network_create.go:107] failed to create docker network bridge-205230 192.168.49.0/24, will retry: subnet is taken
	I1025 21:31:46.752091   18598 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8] amended:false}} dirty:map[] misses:0}
	I1025 21:31:46.752107   18598 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:46.752315   18598 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80] misses:0}
	I1025 21:31:46.752328   18598 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:46.752336   18598 network_create.go:115] attempt to create docker network bridge-205230 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:31:46.752405   18598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230
	W1025 21:31:46.814284   18598 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230 returned with exit code 1
	W1025 21:31:46.814342   18598 network_create.go:107] failed to create docker network bridge-205230 192.168.58.0/24, will retry: subnet is taken
	I1025 21:31:46.814625   18598 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80] misses:1}
	I1025 21:31:46.814657   18598 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:46.814866   18598 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] misses:1}
	I1025 21:31:46.814878   18598 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:46.814886   18598 network_create.go:115] attempt to create docker network bridge-205230 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:31:46.814953   18598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230
	I1025 21:31:46.910591   18598 network_create.go:99] docker network bridge-205230 192.168.67.0/24 created
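The sequence above is minikube's subnet probe: it reserves a candidate /24 starting at 192.168.49.0 and, whenever `docker network create` reports the subnet as taken, steps the third octet by 9 (49 → 58 → 67) until a create succeeds. Below is a minimal sketch of that probe-and-retry loop, assuming only a docker binary on PATH; the function name is illustrative, not minikube's network_create.go.

```go
package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork tries successive 192.168.x.0/24 subnets, starting
// at third octet 49 and stepping by 9 (49, 58, 67, ...), until
// `docker network create` succeeds or the candidates run out.
func createFreeNetwork(name string) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name)
		if err := cmd.Run(); err != nil {
			// Typically "Pool overlaps with other one on this address
			// space": the subnet is taken, so step to the next candidate.
			continue
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free private subnet found for %s", name)
}
```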
	I1025 21:31:46.910622   18598 kic.go:106] calculated static IP "192.168.67.2" for the "bridge-205230" container
	I1025 21:31:46.910723   18598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:31:46.972319   18598 cli_runner.go:164] Run: docker volume create bridge-205230 --label name.minikube.sigs.k8s.io=bridge-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:31:47.033496   18598 oci.go:103] Successfully created a docker volume bridge-205230
	I1025 21:31:47.033623   18598 cli_runner.go:164] Run: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:31:47.260827   18598 cli_runner.go:211] docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:31:47.260867   18598 client.go:171] LocalClient.Create took 753.981945ms
	I1025 21:31:49.261054   18598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:49.261149   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:49.322681   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:49.322781   18598 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:49.601082   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:49.662654   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:49.662749   18598 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:50.203248   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:50.264347   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:50.264437   18598 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:50.921650   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:50.983074   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	W1025 21:31:50.983160   18598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	
	W1025 21:31:50.983181   18598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
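	The repeated `retry.go:31] will retry after ...` lines above are a generic retry helper at work: each failed inspection of the (nonexistent) container's SSH port sleeps a randomized, growing interval and tries again until an overall budget is exhausted. A minimal sketch of that shape, assuming jittered backoff; this is not minikube's actual retry package.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpBackoff calls fn until it succeeds or maxTime elapses,
// sleeping a randomized, growing interval between attempts - the
// pattern behind the "will retry after 276.165072ms" lines above.
func retryExpBackoff(fn func() error, maxTime time.Duration) error {
	deadline := time.Now().Add(maxTime)
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// Grow the base delay and add jitter so parallel tests
		// don't hammer the daemon in lockstep.
		time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
		if base < 2*time.Second {
			base = base * 3 / 2
		}
	}
}
```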
	I1025 21:31:50.983232   18598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:50.983293   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:51.044771   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:51.044843   18598 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:51.276378   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:51.338904   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:51.338994   18598 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:51.786466   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:51.850438   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:51.850535   18598 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:52.171107   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:52.234532   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:31:52.234621   18598 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:52.790965   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:31:52.854523   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	W1025 21:31:52.854608   18598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	
	W1025 21:31:52.854637   18598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:52.854648   18598 start.go:128] duration metric: createHost completed in 6.391232438s
	I1025 21:31:52.854658   18598 start.go:83] releasing machines lock for "bridge-205230", held for 6.391346589s
	W1025 21:31:52.854672   18598 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
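	Exit status 125 from `docker run` signals that the daemon failed before the container's own command could report anything, and the stderr line above pins the root cause for this whole test: Docker Desktop's containerd socket is refusing connections. The sidecar itself only probes whether the preload volume already contains /var/lib (its entrypoint is `/usr/bin/test -d /var/lib`, so exit 0/1 would be the probe's answer). Below is a sketch that separates the daemon failure from the probe's result; the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
)

// volumeHasVarLib runs the same probe as the preload sidecar above: a
// throwaway container whose entrypoint is `/usr/bin/test -d /var/lib`
// against the named volume. Exit 0/1 is the probe's answer; exit 125
// means docker itself failed (e.g. the containerd socket is down).
func volumeHasVarLib(volume, image string) (bool, error) {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var",
		image, "-d", "/var/lib")
	err := cmd.Run()
	if err == nil {
		return true, nil // test -d succeeded: /var/lib exists
	}
	if ee, ok := err.(*exec.ExitError); ok {
		switch ee.ExitCode() {
		case 1:
			return false, nil // probe ran; the directory is missing
		case 125:
			return false, fmt.Errorf("docker daemon error: %w", err)
		}
	}
	return false, err
}
```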
	I1025 21:31:52.855089   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:52.915122   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:52.915172   18598 delete.go:82] Unable to get host status for bridge-205230, assuming it has already been deleted: state: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	W1025 21:31:52.915343   18598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:31:52.915354   18598 start.go:618] Will try again in 5 seconds ...
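	This is minikube's outer recovery loop: a failed StartHost keeps its error, sleeps five seconds, then walks the fix path below (inspect, assume deleted, demolish, recreate) exactly once more before surfacing the failure. A compressed sketch of that two-attempt shape, with illustrative callback names:

```go
package main

import (
	"fmt"
	"time"
)

// startWithRetry mirrors the outer loop above: a failed first attempt
// is followed by a short pause and exactly one recovery attempt that
// demolishes any half-created machine before recreating it.
func startWithRetry(create func() error, demolish func()) error {
	if err := create(); err == nil {
		return nil
	} else {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	}
	time.Sleep(5 * time.Second)
	demolish() // tear down whatever the first pass left behind
	if err := create(); err != nil {
		return fmt.Errorf("failed to start host after recovery: %w", err)
	}
	return nil
}
```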
	I1025 21:31:57.916458   18598 start.go:364] acquiring machines lock for bridge-205230: {Name:mk2f0677100959a898cd1024a8b66bc930ad4386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:31:57.916604   18598 start.go:368] acquired machines lock for "bridge-205230" in 114.207µs
	I1025 21:31:57.916633   18598 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:31:57.916649   18598 fix.go:55] fixHost starting: 
	I1025 21:31:57.917012   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:57.983951   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:57.983993   18598 fix.go:103] recreateIfNeeded on bridge-205230: state= err=unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:57.984021   18598 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:31:58.006004   18598 out.go:177] * docker "bridge-205230" container is missing, will recreate.
	I1025 21:31:58.050885   18598 delete.go:124] DEMOLISHING bridge-205230 ...
	I1025 21:31:58.051097   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:58.113493   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	W1025 21:31:58.113531   18598 stop.go:75] unable to get state: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:58.113542   18598 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:58.113950   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:58.174461   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:58.174514   18598 delete.go:82] Unable to get host status for bridge-205230, assuming it has already been deleted: state: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:58.174590   18598 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-205230
	W1025 21:31:58.234869   18598 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-205230 returned with exit code 1
	I1025 21:31:58.234894   18598 kic.go:356] could not find the container bridge-205230 to remove it. will try anyways
	I1025 21:31:58.234971   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:58.296571   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	W1025 21:31:58.296625   18598 oci.go:84] error getting container status, will try to delete anyways: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:58.296704   18598 cli_runner.go:164] Run: docker exec --privileged -t bridge-205230 /bin/bash -c "sudo init 0"
	W1025 21:31:58.356865   18598 cli_runner.go:211] docker exec --privileged -t bridge-205230 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:31:58.356890   18598 oci.go:646] error shutdown bridge-205230: docker exec --privileged -t bridge-205230 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:59.358342   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:59.421960   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:59.422001   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:59.422008   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:31:59.422025   18598 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:59.822855   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:31:59.888392   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:59.888451   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:31:59.888464   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:31:59.888507   18598 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:00.652303   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:32:00.728349   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:00.728389   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:00.728395   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:32:00.728413   18598 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:02.206540   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:32:02.274458   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:02.274510   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:02.274517   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:32:02.274537   18598 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:03.482279   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:32:03.544783   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:03.544823   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:03.544830   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:32:03.544850   18598 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:05.776337   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:32:05.839030   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:05.839070   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:05.839083   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:32:05.839109   18598 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:08.947590   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:32:09.013372   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:09.013415   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:09.013421   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:32:09.013457   18598 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:14.531908   18598 cli_runner.go:164] Run: docker container inspect bridge-205230 --format={{.State.Status}}
	W1025 21:32:14.597462   18598 cli_runner.go:211] docker container inspect bridge-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:14.597499   18598 oci.go:658] temporary error verifying shutdown: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:14.597505   18598 oci.go:660] temporary error: container bridge-205230 status is  but expect it to be exited
	I1025 21:32:14.597544   18598 oci.go:88] couldn't shut down bridge-205230 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-205230": docker container inspect bridge-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	 
	I1025 21:32:14.597608   18598 cli_runner.go:164] Run: docker rm -f -v bridge-205230
	I1025 21:32:14.661198   18598 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-205230
	W1025 21:32:14.722359   18598 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-205230 returned with exit code 1
	I1025 21:32:14.722464   18598 cli_runner.go:164] Run: docker network inspect bridge-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:14.783223   18598 cli_runner.go:164] Run: docker network rm bridge-205230
	W1025 21:32:14.898759   18598 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:32:14.898777   18598 fix.go:115] Sleeping 1 second for extra luck!
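	The DEMOLISHING pass above is deliberately forgiving: it asks the container to power off via `sudo init 0`, polls `docker container inspect --format={{.State.Status}}` with growing waits (roughly 0.4s up to 5.5s) for an `exited` state, and then force-removes the container and network regardless, since "No such container" just means there is nothing left to tear down. A condensed sketch of that poll-then-force-remove flow, with hypothetical structure:

```go
package main

import (
	"os/exec"
	"strings"
	"time"
)

// demolish mirrors the tear-down above: best-effort graceful stop, a
// bounded poll for the "exited" state, then unconditional force
// removal of the container and its network.
func demolish(name string) {
	// Best effort; fails harmlessly when the container never existed.
	exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run()

	wait := 400 * time.Millisecond
	for i := 0; i < 8; i++ { // give up after a handful of growing waits
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			break
		}
		time.Sleep(wait)
		wait = wait * 3 / 2
	}

	// "No such container"/"No such network" here just means the work
	// is already done, so both removals ignore their errors.
	exec.Command("docker", "rm", "-f", "-v", name).Run()
	exec.Command("docker", "network", "rm", name).Run()
}
```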
	I1025 21:32:15.898896   18598 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:32:15.921279   18598 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:32:15.921434   18598 start.go:159] libmachine.API.Create for "bridge-205230" (driver="docker")
	I1025 21:32:15.921496   18598 client.go:168] LocalClient.Create starting
	I1025 21:32:15.921623   18598 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:32:15.921733   18598 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:15.921752   18598 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:15.921830   18598 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:32:15.921884   18598 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:15.921922   18598 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:15.943293   18598 cli_runner.go:164] Run: docker network inspect bridge-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:32:16.009313   18598 cli_runner.go:211] docker network inspect bridge-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:32:16.009406   18598 network_create.go:272] running [docker network inspect bridge-205230] to gather additional debugging logs...
	I1025 21:32:16.009421   18598 cli_runner.go:164] Run: docker network inspect bridge-205230
	W1025 21:32:16.070192   18598 cli_runner.go:211] docker network inspect bridge-205230 returned with exit code 1
	I1025 21:32:16.070225   18598 network_create.go:275] error running [docker network inspect bridge-205230]: docker network inspect bridge-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-205230
	I1025 21:32:16.070241   18598 network_create.go:277] output of [docker network inspect bridge-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-205230
	
	** /stderr **
	I1025 21:32:16.070322   18598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:16.132063   18598 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] misses:1}
	I1025 21:32:16.132091   18598 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:16.132328   18598 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] misses:2}
	I1025 21:32:16.132338   18598 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:16.132545   18598 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] amended:false}} dirty:map[] misses:0}
	I1025 21:32:16.132554   18598 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:16.132747   18598 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460 192.168.76.0:0xc000bb12b8] misses:0}
	I1025 21:32:16.132763   18598 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:16.132772   18598 network_create.go:115] attempt to create docker network bridge-205230 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:32:16.132844   18598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230
	W1025 21:32:16.194765   18598 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230 returned with exit code 1
	W1025 21:32:16.194818   18598 network_create.go:107] failed to create docker network bridge-205230 192.168.76.0/24, will retry: subnet is taken
	I1025 21:32:16.195075   18598 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460 192.168.76.0:0xc000bb12b8] misses:1}
	I1025 21:32:16.195093   18598 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:16.195299   18598 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460] amended:true}} dirty:map[192.168.49.0:0xc000bb13b8 192.168.58.0:0xc000276c80 192.168.67.0:0xc000608460 192.168.76.0:0xc000bb12b8 192.168.85.0:0xc000608168] misses:1}
	I1025 21:32:16.195312   18598 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:16.195318   18598 network_create.go:115] attempt to create docker network bridge-205230 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:32:16.195398   18598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-205230 bridge-205230
	I1025 21:32:16.289125   18598 network_create.go:99] docker network bridge-205230 192.168.85.0/24 created
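	On this second createHost pass the first attempt's in-process subnet reservations are still unexpired (each is held for 1m0s), so 192.168.49/58/67 are skipped straight from the local map, 192.168.76.0/24 turns out to be taken on the daemon side, and the network finally lands on 192.168.85.0/24. The `&{mu:... read:... dirty:...}` dumps above are that reservation map being printed; a toy version of such a time-bounded reservation, with hypothetical names:

```go
package main

import (
	"sync"
	"time"
)

// reservations remembers subnets this process recently claimed so that
// concurrent (or retried) cluster creations don't race for the same
// /24 before `docker network create` has settled.
var reservations sync.Map // subnet string -> expiry time.Time

// tryReserve claims subnet for ttl; it reports false while an
// unexpired reservation from an earlier attempt still holds it.
func tryReserve(subnet string, ttl time.Duration) bool {
	now := time.Now()
	if v, ok := reservations.Load(subnet); ok {
		if exp := v.(time.Time); now.Before(exp) {
			return false // skip: unexpired reservation
		}
	}
	reservations.Store(subnet, now.Add(ttl))
	return true
}
```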
	I1025 21:32:16.289156   18598 kic.go:106] calculated static IP "192.168.85.2" for the "bridge-205230" container
	I1025 21:32:16.289237   18598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:32:16.351133   18598 cli_runner.go:164] Run: docker volume create bridge-205230 --label name.minikube.sigs.k8s.io=bridge-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:32:16.411792   18598 oci.go:103] Successfully created a docker volume bridge-205230
	I1025 21:32:16.411919   18598 cli_runner.go:164] Run: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:32:16.551185   18598 cli_runner.go:211] docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:32:16.551245   18598 client.go:171] LocalClient.Create took 629.739187ms
	I1025 21:32:18.553653   18598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:18.553813   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:18.619423   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:18.619505   18598 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:18.819546   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:18.886407   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:18.886512   18598 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:19.328853   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:19.389747   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:19.389837   18598 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:19.796295   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:19.860186   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:19.860278   18598 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:20.455776   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:20.517867   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	W1025 21:32:20.517956   18598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	
	W1025 21:32:20.517986   18598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:20.518035   18598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:32:20.518108   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:20.579751   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:20.579836   18598 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:20.849050   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:20.910903   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:20.910985   18598 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:21.424342   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:21.488822   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:21.488910   18598 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:21.937326   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:22.003749   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	W1025 21:32:22.003836   18598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	
	W1025 21:32:22.003864   18598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:22.003875   18598 start.go:128] duration metric: createHost completed in 6.104849464s
	I1025 21:32:22.003939   18598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:22.003982   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:22.064785   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:22.064861   18598 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:22.380440   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:22.443715   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:22.443810   18598 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:22.709395   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:22.777127   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:22.777211   18598 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:23.546130   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:23.609019   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	W1025 21:32:23.609113   18598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	
	W1025 21:32:23.609159   18598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:23.609213   18598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:32:23.609268   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:23.669989   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:23.670067   18598 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:23.928459   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:23.993639   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:23.993725   18598 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:24.194213   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:24.263469   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	I1025 21:32:24.263534   18598 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:24.636247   18598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230
	W1025 21:32:24.702349   18598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230 returned with exit code 1
	W1025 21:32:24.702434   18598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	
	W1025 21:32:24.702477   18598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-205230
	I1025 21:32:24.702485   18598 fix.go:57] fixHost completed within 26.78575004s
	I1025 21:32:24.702492   18598 start.go:83] releasing machines lock for "bridge-205230", held for 26.785790243s
	W1025 21:32:24.702659   18598 out.go:239] * Failed to start docker container. Running "minikube delete -p bridge-205230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p bridge-205230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:32:24.745083   18598 out.go:177] 
	W1025 21:32:24.766481   18598 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-205230 container: docker run --rm --name bridge-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-205230 --entrypoint /usr/bin/test -v bridge-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:32:24.766506   18598 out.go:239] * 
	* 
	W1025 21:32:24.767704   18598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:32:24.832209   18598 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (39.30s)
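The bridge failure above is one chain: Docker Desktop's embedded containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections, the preload-sidecar "docker run" therefore exited 125, the bridge-205230 container was never created, and every later step that needs SSH into the node failed with "No such container". The query minikube keeps retrying is the "docker container inspect" Go template visible throughout the log. Below is a minimal, self-contained sketch of that lookup with backoff; it is an illustration written for this report, not minikube's actual cli_runner.go/retry.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort resolves the host port that docker mapped to container
// port 22/tcp, retrying with backoff the way the log above does.
// The container name is taken from the failing test.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	delay := 250 * time.Millisecond
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = err // "Error: No such container: ..." surfaces as exit status 1
		time.Sleep(delay)
		delay *= 2
	}
	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
}

func main() {
	port, err := sshHostPort("bridge-205230")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}

Against a healthy daemon this prints the mapped port; in the state captured above it would, like the test, exhaust its retries with "No such container: bridge-205230".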

TestNetworkPlugins/group/enable-default-cni/Start (39.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E1025 21:32:04.195451    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : exit status 80 (39.276744318s)

-- stdout --
	* [enable-default-cni-205230] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node enable-default-cni-205230 in cluster enable-default-cni-205230
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-205230" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:31:50.034995   18689 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:31:50.035168   18689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:31:50.035173   18689 out.go:309] Setting ErrFile to fd 2...
	I1025 21:31:50.035176   18689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:31:50.035294   18689 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:31:50.035795   18689 out.go:303] Setting JSON to false
	I1025 21:31:50.050561   18689 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5479,"bootTime":1666753231,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:31:50.050636   18689 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:31:50.072124   18689 out.go:177] * [enable-default-cni-205230] minikube v1.27.1 on Darwin 12.6
	I1025 21:31:50.114905   18689 notify.go:220] Checking for updates...
	I1025 21:31:50.136735   18689 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:31:50.157871   18689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:31:50.179946   18689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:31:50.201879   18689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:31:50.223744   18689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:31:50.244963   18689 config.go:180] Loaded profile config "bridge-205230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:31:50.245038   18689 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:31:50.245084   18689 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:31:50.312435   18689 docker.go:137] docker version: linux-20.10.17
	I1025 21:31:50.312558   18689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:31:50.440198   18689 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:31:50.378122829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:31:50.462254   18689 out.go:177] * Using the docker driver based on user configuration
	I1025 21:31:50.483727   18689 start.go:282] selected driver: docker
	I1025 21:31:50.483764   18689 start.go:808] validating driver "docker" against <nil>
	I1025 21:31:50.483799   18689 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:31:50.486792   18689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:31:50.615209   18689 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:31:50.553683796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:31:50.615324   18689 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	E1025 21:31:50.615469   18689 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1025 21:31:50.615487   18689 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:31:50.637235   18689 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:31:50.659074   18689 cni.go:95] Creating CNI manager for "bridge"
	I1025 21:31:50.659100   18689 start_flags.go:312] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 21:31:50.659138   18689 start_flags.go:317] config:
	{Name:enable-default-cni-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:enable-default-cni-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:31:50.701899   18689 out.go:177] * Starting control plane node enable-default-cni-205230 in cluster enable-default-cni-205230
	I1025 21:31:50.745177   18689 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:31:50.766908   18689 out.go:177] * Pulling base image ...
	I1025 21:31:50.788130   18689 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:31:50.788119   18689 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:31:50.788201   18689 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:31:50.788226   18689 cache.go:57] Caching tarball of preloaded images
	I1025 21:31:50.788435   18689 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:31:50.788453   18689 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:31:50.789419   18689 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/enable-default-cni-205230/config.json ...
	I1025 21:31:50.789530   18689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/enable-default-cni-205230/config.json: {Name:mk662081a7067efb862060252f84f8a21c410215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:31:50.851132   18689 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:31:50.851151   18689 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:31:50.851160   18689 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:31:50.851205   18689 start.go:364] acquiring machines lock for enable-default-cni-205230: {Name:mkb1eebe2acf76d1a7238635bd1991add5e97944 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:31:50.851370   18689 start.go:368] acquired machines lock for "enable-default-cni-205230" in 134.063µs
	I1025 21:31:50.851396   18689 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:enable-default-cni-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:31:50.851460   18689 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:31:50.894936   18689 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:31:50.895305   18689 start.go:159] libmachine.API.Create for "enable-default-cni-205230" (driver="docker")
	I1025 21:31:50.895345   18689 client.go:168] LocalClient.Create starting
	I1025 21:31:50.895474   18689 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:31:50.895544   18689 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:50.895574   18689 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:50.895737   18689 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:31:50.895787   18689 main.go:134] libmachine: Decoding PEM data...
	I1025 21:31:50.895805   18689 main.go:134] libmachine: Parsing certificate...
	I1025 21:31:50.896624   18689 cli_runner.go:164] Run: docker network inspect enable-default-cni-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:31:50.958707   18689 cli_runner.go:211] docker network inspect enable-default-cni-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:31:50.958793   18689 network_create.go:272] running [docker network inspect enable-default-cni-205230] to gather additional debugging logs...
	I1025 21:31:50.958807   18689 cli_runner.go:164] Run: docker network inspect enable-default-cni-205230
	W1025 21:31:51.021098   18689 cli_runner.go:211] docker network inspect enable-default-cni-205230 returned with exit code 1
	I1025 21:31:51.021119   18689 network_create.go:275] error running [docker network inspect enable-default-cni-205230]: docker network inspect enable-default-cni-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-205230
	I1025 21:31:51.021130   18689 network_create.go:277] output of [docker network inspect enable-default-cni-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-205230
	
	** /stderr **
	I1025 21:31:51.021211   18689 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:31:51.083608   18689 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8] misses:0}
	I1025 21:31:51.083644   18689 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.083657   18689 network_create.go:115] attempt to create docker network enable-default-cni-205230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:31:51.083719   18689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230
	W1025 21:31:51.144725   18689 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230 returned with exit code 1
	W1025 21:31:51.144772   18689 network_create.go:107] failed to create docker network enable-default-cni-205230 192.168.49.0/24, will retry: subnet is taken
	I1025 21:31:51.145031   18689 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:false}} dirty:map[] misses:0}
	I1025 21:31:51.145047   18689 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.145247   18689 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0] misses:0}
	I1025 21:31:51.145261   18689 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.145271   18689 network_create.go:115] attempt to create docker network enable-default-cni-205230 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:31:51.145326   18689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230
	W1025 21:31:51.205424   18689 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230 returned with exit code 1
	W1025 21:31:51.205454   18689 network_create.go:107] failed to create docker network enable-default-cni-205230 192.168.58.0/24, will retry: subnet is taken
	I1025 21:31:51.205713   18689 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0] misses:1}
	I1025 21:31:51.205729   18689 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.205938   18689 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628] misses:1}
	I1025 21:31:51.205949   18689 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.205956   18689 network_create.go:115] attempt to create docker network enable-default-cni-205230 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:31:51.206010   18689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230
	W1025 21:31:51.265974   18689 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230 returned with exit code 1
	W1025 21:31:51.266021   18689 network_create.go:107] failed to create docker network enable-default-cni-205230 192.168.67.0/24, will retry: subnet is taken
	I1025 21:31:51.266245   18689 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628] misses:2}
	I1025 21:31:51.266265   18689 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.266477   18689 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] misses:2}
	I1025 21:31:51.266488   18689 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:31:51.266497   18689 network_create.go:115] attempt to create docker network enable-default-cni-205230 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:31:51.266558   18689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230
	I1025 21:31:51.361739   18689 network_create.go:99] docker network enable-default-cni-205230 192.168.76.0/24 created
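The sequence just above is minikube's subnet probing: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are reserved in turn, found taken, and skipped, and creation succeeds on 192.168.76.0/24. A compact sketch of that walk, assuming the fixed step of 9 in the third octet that the log suggests (an illustration, not minikube's actual network.go):

package main

import (
	"fmt"
	"os/exec"
)

// createFirstFreeNetwork walks /24 candidates the way the log does:
// 192.168.49.0, 192.168.58.0, 192.168.67.0, ... and keeps the first
// subnet that "docker network create" accepts.
func createFirstFreeNetwork(name string) (string, error) {
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			name).Run()
		if err == nil {
			return subnet, nil // the run above lands on 192.168.76.0/24
		}
		// exit status 1 ("subnet is taken"): try the next candidate
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet for %s", name)
}

func main() {
	fmt.Println(createFirstFreeNetwork("enable-default-cni-205230"))
}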
	I1025 21:31:51.361778   18689 kic.go:106] calculated static IP "192.168.76.2" for the "enable-default-cni-205230" container
	I1025 21:31:51.361865   18689 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:31:51.423367   18689 cli_runner.go:164] Run: docker volume create enable-default-cni-205230 --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:31:51.484911   18689 oci.go:103] Successfully created a docker volume enable-default-cni-205230
	I1025 21:31:51.485037   18689 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:31:51.696410   18689 cli_runner.go:211] docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:31:51.696454   18689 client.go:171] LocalClient.Create took 801.096732ms
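The sidecar that failed at 21:31:51.696 above is a volume probe: minikube mounts the freshly created volume at /var in a throwaway container whose entrypoint is /usr/bin/test, so the container's only job is to confirm /var/lib exists on the volume before the real node container is built. A hedged sketch of the same probe follows; the names and image reference are copied from the log, while the helper itself is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

const kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191"

// probeVolume mirrors the preload sidecar: mount the volume at /var and
// run /usr/bin/test -d /var/lib inside a throwaway container.
func probeVolume(volume string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--name", volume+"-preload-sidecar",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var", kicbase, "-d", "/var/lib")
	if out, err := cmd.CombinedOutput(); err != nil {
		// exit status 125 means the daemon rejected the run outright,
		// e.g. the containerd socket errors in this report; a failing
		// /usr/bin/test would instead exit 1
		return fmt.Errorf("volume probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := probeVolume("enable-default-cni-205230"); err != nil {
		fmt.Println(err)
	}
}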
	I1025 21:31:53.696809   18689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:31:53.696958   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:53.762073   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:53.762163   18689 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:54.039300   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:54.103980   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:54.104063   18689 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:54.646685   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:54.710690   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:54.710795   18689 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:55.368283   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:55.431774   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	W1025 21:31:55.431889   18689 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	
	W1025 21:31:55.431911   18689 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:55.431957   18689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:31:55.431999   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:55.493079   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:55.493175   18689 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:55.725172   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:55.792399   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:55.792475   18689 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:56.239830   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:56.304674   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:56.304766   18689 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:56.625399   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:56.691796   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:31:56.691914   18689 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:57.248251   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:31:57.311727   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	W1025 21:31:57.311836   18689 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	
	W1025 21:31:57.311854   18689 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:31:57.311863   18689 start.go:128] duration metric: createHost completed in 6.460376456s
	I1025 21:31:57.311877   18689 start.go:83] releasing machines lock for "enable-default-cni-205230", held for 6.460472231s
	W1025 21:31:57.311892   18689 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:31:57.312288   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:31:57.373459   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:31:57.373504   18689 delete.go:82] Unable to get host status for enable-default-cni-205230, assuming it has already been deleted: state: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	W1025 21:31:57.373663   18689 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:31:57.373674   18689 start.go:618] Will try again in 5 seconds ...
	I1025 21:32:02.375898   18689 start.go:364] acquiring machines lock for enable-default-cni-205230: {Name:mkb1eebe2acf76d1a7238635bd1991add5e97944 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:02.376074   18689 start.go:368] acquired machines lock for "enable-default-cni-205230" in 140.801µs
	I1025 21:32:02.376102   18689 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:32:02.376117   18689 fix.go:55] fixHost starting: 
	I1025 21:32:02.376533   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:02.438847   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:02.438888   18689 fix.go:103] recreateIfNeeded on enable-default-cni-205230: state= err=unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:02.438922   18689 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:32:02.481721   18689 out.go:177] * docker "enable-default-cni-205230" container is missing, will recreate.
	I1025 21:32:02.503506   18689 delete.go:124] DEMOLISHING enable-default-cni-205230 ...
	I1025 21:32:02.503761   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:02.566545   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	W1025 21:32:02.566586   18689 stop.go:75] unable to get state: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:02.566602   18689 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:02.566971   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:02.627169   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:02.627270   18689 delete.go:82] Unable to get host status for enable-default-cni-205230, assuming it has already been deleted: state: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:02.627356   18689 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-205230
	W1025 21:32:02.687677   18689 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-205230 returned with exit code 1
	I1025 21:32:02.687705   18689 kic.go:356] could not find the container enable-default-cni-205230 to remove it. will try anyways
	I1025 21:32:02.687791   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:02.748257   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	W1025 21:32:02.748304   18689 oci.go:84] error getting container status, will try to delete anyways: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:02.748408   18689 cli_runner.go:164] Run: docker exec --privileged -t enable-default-cni-205230 /bin/bash -c "sudo init 0"
	W1025 21:32:02.808439   18689 cli_runner.go:211] docker exec --privileged -t enable-default-cni-205230 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:32:02.808462   18689 oci.go:646] error shutdown enable-default-cni-205230: docker exec --privileged -t enable-default-cni-205230 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:03.808715   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:03.873990   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:03.874034   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:03.874044   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:03.874074   18689 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:04.276559   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:04.341462   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:04.341503   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:04.341513   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:04.341533   18689 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:05.105379   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:05.170504   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:05.170547   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:05.170563   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:05.170597   18689 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:06.648660   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:06.715704   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:06.715767   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:06.715779   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:06.715801   18689 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:07.922263   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:07.988705   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:07.988747   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:07.988755   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:07.988775   18689 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:10.219809   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:10.286396   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:10.286437   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:10.286448   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:10.286467   18689 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:13.393627   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:13.458052   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:13.458110   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:13.458121   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:13.458141   18689 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:18.978626   18689 cli_runner.go:164] Run: docker container inspect enable-default-cni-205230 --format={{.State.Status}}
	W1025 21:32:19.040578   18689 cli_runner.go:211] docker container inspect enable-default-cni-205230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:19.040617   18689 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:19.040627   18689 oci.go:660] temporary error: container enable-default-cni-205230 status is  but expect it to be exited
	I1025 21:32:19.040653   18689 oci.go:88] couldn't shut down enable-default-cni-205230 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-205230": docker container inspect enable-default-cni-205230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	 
	I1025 21:32:19.040714   18689 cli_runner.go:164] Run: docker rm -f -v enable-default-cni-205230
	I1025 21:32:19.102370   18689 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-205230
	W1025 21:32:19.163419   18689 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-205230 returned with exit code 1
	I1025 21:32:19.163517   18689 cli_runner.go:164] Run: docker network inspect enable-default-cni-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:19.224248   18689 cli_runner.go:164] Run: docker network rm enable-default-cni-205230
	W1025 21:32:19.333193   18689 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:32:19.333211   18689 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:32:20.333846   18689 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:32:20.356184   18689 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:32:20.356343   18689 start.go:159] libmachine.API.Create for "enable-default-cni-205230" (driver="docker")
	I1025 21:32:20.356369   18689 client.go:168] LocalClient.Create starting
	I1025 21:32:20.356501   18689 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:32:20.356618   18689 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:20.356642   18689 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:20.356728   18689 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:32:20.356772   18689 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:20.356786   18689 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:20.357448   18689 cli_runner.go:164] Run: docker network inspect enable-default-cni-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:32:20.420596   18689 cli_runner.go:211] docker network inspect enable-default-cni-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:32:20.420688   18689 network_create.go:272] running [docker network inspect enable-default-cni-205230] to gather additional debugging logs...
	I1025 21:32:20.420703   18689 cli_runner.go:164] Run: docker network inspect enable-default-cni-205230
	W1025 21:32:20.482933   18689 cli_runner.go:211] docker network inspect enable-default-cni-205230 returned with exit code 1
	I1025 21:32:20.482957   18689 network_create.go:275] error running [docker network inspect enable-default-cni-205230]: docker network inspect enable-default-cni-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-205230
	I1025 21:32:20.482972   18689 network_create.go:277] output of [docker network inspect enable-default-cni-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-205230
	
	** /stderr **
	I1025 21:32:20.483049   18689 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:20.546595   18689 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] misses:2}
	I1025 21:32:20.546633   18689 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.546847   18689 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] misses:3}
	I1025 21:32:20.546860   18689 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.547090   18689 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] amended:false}} dirty:map[] misses:0}
	I1025 21:32:20.547103   18689 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.547287   18689 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] amended:false}} dirty:map[] misses:0}
	I1025 21:32:20.547295   18689 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.547481   18689 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660 192.168.85.0:0xc0006e02a0] misses:0}
	I1025 21:32:20.547496   18689 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.547503   18689 network_create.go:115] attempt to create docker network enable-default-cni-205230 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:32:20.547564   18689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230
	W1025 21:32:20.607852   18689 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230 returned with exit code 1
	W1025 21:32:20.607888   18689 network_create.go:107] failed to create docker network enable-default-cni-205230 192.168.85.0/24, will retry: subnet is taken
	I1025 21:32:20.608159   18689 network.go:286] skipping subnet 192.168.85.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660 192.168.85.0:0xc0006e02a0] misses:1}
	I1025 21:32:20.608179   18689 network.go:244] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.608379   18689 network.go:295] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660] amended:true}} dirty:map[192.168.49.0:0xc000e8e5b8 192.168.58.0:0xc000e8e5f0 192.168.67.0:0xc000e8e628 192.168.76.0:0xc000e8e660 192.168.85.0:0xc0006e02a0 192.168.94.0:0xc000572360] misses:1}
	I1025 21:32:20.608395   18689 network.go:241] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:20.608402   18689 network_create.go:115] attempt to create docker network enable-default-cni-205230 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 21:32:20.608461   18689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-205230 enable-default-cni-205230
	I1025 21:32:20.701652   18689 network_create.go:99] docker network enable-default-cni-205230 192.168.94.0/24 created
	I1025 21:32:20.701685   18689 kic.go:106] calculated static IP "192.168.94.2" for the "enable-default-cni-205230" container
	I1025 21:32:20.701788   18689 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:32:20.764195   18689 cli_runner.go:164] Run: docker volume create enable-default-cni-205230 --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:32:20.824692   18689 oci.go:103] Successfully created a docker volume enable-default-cni-205230
	I1025 21:32:20.824817   18689 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:32:20.955013   18689 cli_runner.go:211] docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:32:20.955061   18689 client.go:171] LocalClient.Create took 598.684122ms
	I1025 21:32:22.955378   18689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:22.955474   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:23.020524   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:23.020630   18689 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:23.221325   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:23.285727   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:23.285831   18689 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:23.729275   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:23.791172   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:23.791253   18689 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:24.196202   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:24.263300   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:24.263392   18689 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:24.858080   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:24.922560   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	W1025 21:32:24.922647   18689 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	
	W1025 21:32:24.922662   18689 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:24.922709   18689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:32:24.922764   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:24.986764   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:24.986863   18689 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:25.256714   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:25.318627   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:25.318728   18689 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:25.832008   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:25.913731   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:25.913889   18689 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:26.360862   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:26.422456   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	W1025 21:32:26.422560   18689 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	
	W1025 21:32:26.422575   18689 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:26.422590   18689 start.go:128] duration metric: createHost completed in 6.088702229s
	I1025 21:32:26.422663   18689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:26.422704   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:26.484930   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:26.485025   18689 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:26.798330   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:26.860881   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:26.860968   18689 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:27.126368   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:27.188154   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:27.188236   18689 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:27.958643   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:28.021947   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	W1025 21:32:28.022040   18689 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	
	W1025 21:32:28.022057   18689 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:28.022115   18689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:32:28.022206   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:28.081693   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:28.081776   18689 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:28.340178   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:28.404155   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:28.404242   18689 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:28.604731   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:28.670133   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	I1025 21:32:28.670221   18689 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:29.042801   18689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230
	W1025 21:32:29.107911   18689 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230 returned with exit code 1
	W1025 21:32:29.108018   18689 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	
	W1025 21:32:29.108034   18689 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-205230
	I1025 21:32:29.108046   18689 fix.go:57] fixHost completed within 26.731843865s
	I1025 21:32:29.108053   18689 start.go:83] releasing machines lock for "enable-default-cni-205230", held for 26.731881583s
	W1025 21:32:29.108218   18689 out.go:239] * Failed to start docker container. Running "minikube delete -p enable-default-cni-205230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-205230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:32:29.150657   18689 out.go:177] 
	W1025 21:32:29.172850   18689 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-205230 container: docker run --rm --name enable-default-cni-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-205230 --entrypoint /usr/bin/test -v enable-default-cni-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:32:29.172908   18689 out.go:239] * 
	* 
	W1025 21:32:29.174122   18689 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:32:29.238708   18689 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (39.29s)
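
Every error in the block above has one root cause: Docker Desktop's containerd socket was down ("dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused"). Plain daemon calls still went through (docker network create and docker volume create both succeeded), but anything that had to start a container, such as the preload-sidecar docker run, failed with exit status 125; the recreate path could never inspect or remove the container, and the run finally aborted with GUEST_PROVISION (exit status 80). The retry.go/oci.go lines show the shutdown-verification loop polling docker container inspect at jittered, growing intervals (400ms, 761ms, 1.47s, 1.2s, 2.2s, 3.1s, 5.5s) before giving up. Below is a minimal Go sketch of that poll-with-backoff pattern; the helper names (containerStatus, waitExited) and the exact growth factor are ours, not minikube's.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out the same way the log does:
// docker container inspect <name> --format {{.State.Status}}
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// waitExited polls until the container reports "exited", growing the
// delay with jitter, roughly like the retry.go intervals in the log.
func waitExited(name string, maxAttempts int) error {
	delay := 400 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		fmt.Printf("attempt %d: status %q, err %v; retrying in %v\n",
			attempt, status, err, delay)
		time.Sleep(delay)
		// grow the delay by ~1x-2x each round, with jitter
		delay = time.Duration(float64(delay) * (1.0 + rand.Float64()))
	}
	return fmt.Errorf("container %q never reached state \"exited\"", name)
}

func main() {
	// Profile name taken from the failing test; any container name works.
	if err := waitExited("enable-default-cni-205230", 8); err != nil {
		fmt.Println(err)
	}
}

Against a healthy daemon this returns as soon as inspect reports "exited"; against the daemon in this run every attempt fails with "No such container", exactly as the log shows.
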
TestNetworkPlugins/group/kubenet/Start (4.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-205230 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : signal: killed (4.856220191s)

-- stdout --
	* [kubenet-205230] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubenet-205230 in cluster kubenet-205230
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
** stderr ** 
	I1025 21:32:25.957383   18978 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:32:25.957558   18978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:25.957564   18978 out.go:309] Setting ErrFile to fd 2...
	I1025 21:32:25.957567   18978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:25.957678   18978 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:32:25.958168   18978 out.go:303] Setting JSON to false
	I1025 21:32:25.972820   18978 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5514,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:32:25.972946   18978 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:32:25.995739   18978 out.go:177] * [kubenet-205230] minikube v1.27.1 on Darwin 12.6
	I1025 21:32:26.038597   18978 notify.go:220] Checking for updates...
	I1025 21:32:26.060467   18978 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:32:26.082199   18978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:32:26.103590   18978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:32:26.125613   18978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:32:26.147359   18978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:32:26.169245   18978 config.go:180] Loaded profile config "enable-default-cni-205230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:32:26.169406   18978 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:32:26.169484   18978 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:32:26.237841   18978 docker.go:137] docker version: linux-20.10.17
	I1025 21:32:26.237950   18978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:26.365913   18978 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:32:26.307551294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:32:26.389378   18978 out.go:177] * Using the docker driver based on user configuration
	I1025 21:32:26.410524   18978 start.go:282] selected driver: docker
	I1025 21:32:26.410543   18978 start.go:808] validating driver "docker" against <nil>
	I1025 21:32:26.410560   18978 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:32:26.412699   18978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:26.545224   18978 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:32:26.484796189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:32:26.545341   18978 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:32:26.545492   18978 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:32:26.567415   18978 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:32:26.589178   18978 cni.go:91] network plugin configured as "kubenet", returning disabled
	I1025 21:32:26.589207   18978 start_flags.go:317] config:
	{Name:kubenet-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubenet-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:32:26.611230   18978 out.go:177] * Starting control plane node kubenet-205230 in cluster kubenet-205230
	I1025 21:32:26.633092   18978 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:32:26.655031   18978 out.go:177] * Pulling base image ...
	I1025 21:32:26.703152   18978 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:32:26.703231   18978 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:32:26.703240   18978 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:32:26.703254   18978 cache.go:57] Caching tarball of preloaded images
	I1025 21:32:26.703545   18978 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:32:26.703573   18978 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:32:26.704579   18978 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kubenet-205230/config.json ...
	I1025 21:32:26.704691   18978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/kubenet-205230/config.json: {Name:mk0dd5e14dff68a80e3aebc2fe541dcbdf332f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:32:26.767269   18978 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:32:26.767287   18978 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:32:26.767296   18978 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:32:26.767336   18978 start.go:364] acquiring machines lock for kubenet-205230: {Name:mkef14dac8b4be062286d197109946db19e43ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:26.767489   18978 start.go:368] acquired machines lock for "kubenet-205230" in 141.64µs
	I1025 21:32:26.767515   18978 start.go:93] Provisioning new machine with config: &{Name:kubenet-205230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubenet-205230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:32:26.767580   18978 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:32:26.811091   18978 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 21:32:26.811295   18978 start.go:159] libmachine.API.Create for "kubenet-205230" (driver="docker")
	I1025 21:32:26.811319   18978 client.go:168] LocalClient.Create starting
	I1025 21:32:26.811390   18978 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:32:26.811426   18978 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:26.811441   18978 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:26.811504   18978 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:32:26.811527   18978 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:26.811538   18978 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:26.811968   18978 cli_runner.go:164] Run: docker network inspect kubenet-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:32:26.874047   18978 cli_runner.go:211] docker network inspect kubenet-205230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:32:26.874134   18978 network_create.go:272] running [docker network inspect kubenet-205230] to gather additional debugging logs...
	I1025 21:32:26.874151   18978 cli_runner.go:164] Run: docker network inspect kubenet-205230
	W1025 21:32:26.935419   18978 cli_runner.go:211] docker network inspect kubenet-205230 returned with exit code 1
	I1025 21:32:26.935441   18978 network_create.go:275] error running [docker network inspect kubenet-205230]: docker network inspect kubenet-205230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-205230
	I1025 21:32:26.935454   18978 network_create.go:277] output of [docker network inspect kubenet-205230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-205230
	
	** /stderr **
	I1025 21:32:26.935517   18978 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:26.996381   18978 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000f0c2a0] misses:0}
	I1025 21:32:26.996418   18978 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:26.996431   18978 network_create.go:115] attempt to create docker network kubenet-205230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:32:26.996508   18978 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-205230 kubenet-205230
	W1025 21:32:27.057626   18978 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-205230 kubenet-205230 returned with exit code 1
	W1025 21:32:27.057661   18978 network_create.go:107] failed to create docker network kubenet-205230 192.168.49.0/24, will retry: subnet is taken
	I1025 21:32:27.057932   18978 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f0c2a0] amended:false}} dirty:map[] misses:0}
	I1025 21:32:27.057948   18978 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:27.058177   18978 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f0c2a0] amended:true}} dirty:map[192.168.49.0:0xc000f0c2a0 192.168.58.0:0xc000f0c2d8] misses:0}
	I1025 21:32:27.058192   18978 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:27.058203   18978 network_create.go:115] attempt to create docker network kubenet-205230 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:32:27.058267   18978 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-205230 kubenet-205230
	W1025 21:32:27.119377   18978 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-205230 kubenet-205230 returned with exit code 1
	W1025 21:32:27.119412   18978 network_create.go:107] failed to create docker network kubenet-205230 192.168.58.0/24, will retry: subnet is taken
	I1025 21:32:27.119691   18978 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f0c2a0] amended:true}} dirty:map[192.168.49.0:0xc000f0c2a0 192.168.58.0:0xc000f0c2d8] misses:1}
	I1025 21:32:27.119709   18978 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:27.120423   18978 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f0c2a0] amended:true}} dirty:map[192.168.49.0:0xc000f0c2a0 192.168.58.0:0xc000f0c2d8 192.168.67.0:0xc000ba3660] misses:1}
	I1025 21:32:27.120453   18978 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:27.120474   18978 network_create.go:115] attempt to create docker network kubenet-205230 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:32:27.120568   18978 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-205230 kubenet-205230
	I1025 21:32:27.214492   18978 network_create.go:99] docker network kubenet-205230 192.168.67.0/24 created
	I1025 21:32:27.214529   18978 kic.go:106] calculated static IP "192.168.67.2" for the "kubenet-205230" container
	I1025 21:32:27.214611   18978 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:32:27.276305   18978 cli_runner.go:164] Run: docker volume create kubenet-205230 --label name.minikube.sigs.k8s.io=kubenet-205230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:32:27.338377   18978 oci.go:103] Successfully created a docker volume kubenet-205230
	I1025 21:32:27.338501   18978 cli_runner.go:164] Run: docker run --rm --name kubenet-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-205230 --entrypoint /usr/bin/test -v kubenet-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:32:27.555973   18978 cli_runner.go:211] docker run --rm --name kubenet-205230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-205230 --entrypoint /usr/bin/test -v kubenet-205230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:32:27.556017   18978 client.go:171] LocalClient.Create took 744.688271ms
	I1025 21:32:29.556333   18978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:29.556415   18978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230
	W1025 21:32:29.618681   18978 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230 returned with exit code 1
	I1025 21:32:29.618799   18978 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-205230
	I1025 21:32:29.895829   18978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230
	W1025 21:32:29.956761   18978 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230 returned with exit code 1
	I1025 21:32:29.956857   18978 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-205230
	I1025 21:32:30.499264   18978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230
	W1025 21:32:30.562606   18978 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230 returned with exit code 1
	I1025 21:32:30.562713   18978 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-205230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-205230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-205230

** /stderr **
net_test.go:103: failed start: signal: killed
--- FAIL: TestNetworkPlugins/group/kubenet/Start (4.86s)
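Note on the failure above: before the preload-sidecar `docker run` failed with exit code 125, network_create.go probed candidate private /24 subnets (192.168.49.0, then 192.168.58.0, then 192.168.67.0), retrying `docker network create` each time a subnet was already taken. A minimal Go sketch of that probing loop, under the assumption that candidates step by 9 and that an "overlap" message from dockerd means the subnet is in use; the function name and bounds are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork walks candidate private /24 subnets until
// `docker network create` accepts one, mirroring the retry
// sequence traced by the network_create.go lines above.
func createNetwork(name string) (string, error) {
	for third := 49; third <= 103; third += 9 { // 49, 58, 67, 76, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		// dockerd reports an overlapping address pool when the subnet is
		// taken; any other failure is treated as fatal.
		if !strings.Contains(string(out), "overlap") {
			return "", fmt.Errorf("network create: %v: %s", err, out)
		}
	}
	return "", fmt.Errorf("no free private subnet found for %q", name)
}

func main() {
	subnet, err := createNetwork("kubenet-205230")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created network on", subnet)
}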
E1025 21:36:36.101606    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
E1025 21:37:04.194437    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/skaffold-205120/client.crt: no such file or directory
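The repeated retry.go:31 entries above follow minikube's generic retry-with-delay pattern: probe the published SSH port via `docker container inspect`, sleeping briefly between attempts while the container is still missing. A minimal Go sketch, reusing the inspect template from the log; the fixed delay schedule and helper name are assumptions for illustration (minikube computes its retry intervals dynamically):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshPort extracts the host port published for 22/tcp, using the same
// Go template that appears in the cli_runner.go lines above.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Delays copied from the log for illustration only.
	delays := []time.Duration{276 * time.Millisecond, 540 * time.Millisecond, 655 * time.Millisecond}
	for i, d := range delays {
		port, err := sshPort("kubenet-205230")
		if err == nil {
			fmt.Println("ssh port:", port)
			return
		}
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, d)
		time.Sleep(d)
	}
	fmt.Println("giving up: container never appeared")
}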

TestStartStop/group/old-k8s-version/serial/FirstStart (39.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-213230 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-213230 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 80 (39.669255503s)

-- stdout --
	* [old-k8s-version-213230] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-213230 in cluster old-k8s-version-213230
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-213230" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:32:30.324184   19069 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:32:30.324887   19069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:30.324896   19069 out.go:309] Setting ErrFile to fd 2...
	I1025 21:32:30.324905   19069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:30.325157   19069 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:32:30.325900   19069 out.go:303] Setting JSON to false
	I1025 21:32:30.340576   19069 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5519,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:32:30.340693   19069 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:32:30.362630   19069 out.go:177] * [old-k8s-version-213230] minikube v1.27.1 on Darwin 12.6
	I1025 21:32:30.405771   19069 notify.go:220] Checking for updates...
	I1025 21:32:30.427425   19069 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:32:30.448763   19069 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:32:30.470528   19069 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:32:30.491354   19069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:32:30.512465   19069 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:32:30.533863   19069 config.go:180] Loaded profile config "kubenet-205230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:32:30.533934   19069 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:32:30.533985   19069 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:32:30.601611   19069 docker.go:137] docker version: linux-20.10.17
	I1025 21:32:30.601760   19069 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:30.729900   19069 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:32:30.6748098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:32:30.751782   19069 out.go:177] * Using the docker driver based on user configuration
	I1025 21:32:30.773644   19069 start.go:282] selected driver: docker
	I1025 21:32:30.773660   19069 start.go:808] validating driver "docker" against <nil>
	I1025 21:32:30.773680   19069 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:32:30.776156   19069 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:30.908193   19069 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:32:30.850908561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:32:30.908302   19069 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:32:30.908468   19069 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:32:30.930183   19069 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:32:30.950862   19069 cni.go:95] Creating CNI manager for ""
	I1025 21:32:30.950891   19069 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:32:30.950917   19069 start_flags.go:317] config:
	{Name:old-k8s-version-213230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-213230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:32:30.973179   19069 out.go:177] * Starting control plane node old-k8s-version-213230 in cluster old-k8s-version-213230
	I1025 21:32:31.016845   19069 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:32:31.038092   19069 out.go:177] * Pulling base image ...
	I1025 21:32:31.097085   19069 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 21:32:31.097098   19069 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:32:31.097187   19069 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 21:32:31.097220   19069 cache.go:57] Caching tarball of preloaded images
	I1025 21:32:31.098060   19069 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:32:31.098255   19069 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 21:32:31.098696   19069 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/old-k8s-version-213230/config.json ...
	I1025 21:32:31.098760   19069 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/old-k8s-version-213230/config.json: {Name:mkc54d0712ff479380ed269280d0d036c4ae4986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:32:31.161503   19069 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:32:31.161524   19069 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:32:31.161533   19069 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:32:31.161569   19069 start.go:364] acquiring machines lock for old-k8s-version-213230: {Name:mkf15d742925eff5dfa273d5f3f97b7bc6f95cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:31.161723   19069 start.go:368] acquired machines lock for "old-k8s-version-213230" in 142.644µs
	I1025 21:32:31.161749   19069 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-213230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-213230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:32:31.161850   19069 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:32:31.203888   19069 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:32:31.204124   19069 start.go:159] libmachine.API.Create for "old-k8s-version-213230" (driver="docker")
	I1025 21:32:31.204148   19069 client.go:168] LocalClient.Create starting
	I1025 21:32:31.204223   19069 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:32:31.204274   19069 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:31.204297   19069 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:31.204387   19069 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:32:31.204421   19069 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:31.204443   19069 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:31.204872   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:32:31.267663   19069 cli_runner.go:211] docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:32:31.267758   19069 network_create.go:272] running [docker network inspect old-k8s-version-213230] to gather additional debugging logs...
	I1025 21:32:31.267771   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230
	W1025 21:32:31.329580   19069 cli_runner.go:211] docker network inspect old-k8s-version-213230 returned with exit code 1
	I1025 21:32:31.329604   19069 network_create.go:275] error running [docker network inspect old-k8s-version-213230]: docker network inspect old-k8s-version-213230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-213230
	I1025 21:32:31.329616   19069 network_create.go:277] output of [docker network inspect old-k8s-version-213230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-213230
	
	** /stderr **
	I1025 21:32:31.329676   19069 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:31.391913   19069 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000548528] misses:0}
	I1025 21:32:31.391949   19069 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.391968   19069 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:32:31.392050   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	W1025 21:32:31.454024   19069 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230 returned with exit code 1
	W1025 21:32:31.454078   19069 network_create.go:107] failed to create docker network old-k8s-version-213230 192.168.49.0/24, will retry: subnet is taken
	I1025 21:32:31.454353   19069 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:false}} dirty:map[] misses:0}
	I1025 21:32:31.454369   19069 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.454594   19069 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0] misses:0}
	I1025 21:32:31.454618   19069 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.454628   19069 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:32:31.454693   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	W1025 21:32:31.518273   19069 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230 returned with exit code 1
	W1025 21:32:31.518327   19069 network_create.go:107] failed to create docker network old-k8s-version-213230 192.168.58.0/24, will retry: subnet is taken
	I1025 21:32:31.518570   19069 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0] misses:1}
	I1025 21:32:31.518587   19069 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.518832   19069 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8] misses:1}
	I1025 21:32:31.518845   19069 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.518853   19069 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:32:31.518918   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	W1025 21:32:31.582632   19069 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230 returned with exit code 1
	W1025 21:32:31.582665   19069 network_create.go:107] failed to create docker network old-k8s-version-213230 192.168.67.0/24, will retry: subnet is taken
	I1025 21:32:31.582927   19069 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8] misses:2}
	I1025 21:32:31.582944   19069 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.583170   19069 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310] misses:2}
	I1025 21:32:31.583191   19069 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:31.583200   19069 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:32:31.583270   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	I1025 21:32:31.677519   19069 network_create.go:99] docker network old-k8s-version-213230 192.168.76.0/24 created
	I1025 21:32:31.677563   19069 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-213230" container
	I1025 21:32:31.677647   19069 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:32:31.740257   19069 cli_runner.go:164] Run: docker volume create old-k8s-version-213230 --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:32:31.802346   19069 oci.go:103] Successfully created a docker volume old-k8s-version-213230
	I1025 21:32:31.802476   19069 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:32:32.432167   19069 cli_runner.go:211] docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:32:32.432220   19069 client.go:171] LocalClient.Create took 1.228060224s
	I1025 21:32:34.434612   19069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:34.434724   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:34.498316   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:34.498434   19069 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:34.777045   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:34.840630   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:34.840736   19069 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:35.383379   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:35.447838   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:35.447931   19069 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:36.105181   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:36.166384   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:32:36.166478   19069 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:32:36.166494   19069 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:36.166557   19069 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:32:36.166600   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:36.227663   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:36.227751   19069 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:36.461304   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:36.525898   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:36.525987   19069 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:36.972504   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:37.035675   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:37.035772   19069 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:37.356336   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:37.421985   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:32:37.422071   19069 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:37.978400   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:32:38.040656   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:32:38.040749   19069 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:32:38.040766   19069 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:38.040778   19069 start.go:128] duration metric: createHost completed in 6.878900117s
	I1025 21:32:38.040785   19069 start.go:83] releasing machines lock for "old-k8s-version-213230", held for 6.879031266s
	W1025 21:32:38.040798   19069 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:32:38.041186   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:38.101501   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:38.101541   19069 delete.go:82] Unable to get host status for old-k8s-version-213230, assuming it has already been deleted: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:32:38.101691   19069 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:32:38.101703   19069 start.go:618] Will try again in 5 seconds ...
	I1025 21:32:43.102371   19069 start.go:364] acquiring machines lock for old-k8s-version-213230: {Name:mkf15d742925eff5dfa273d5f3f97b7bc6f95cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:43.102519   19069 start.go:368] acquired machines lock for "old-k8s-version-213230" in 113.195µs
	I1025 21:32:43.102548   19069 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:32:43.102563   19069 fix.go:55] fixHost starting: 
	I1025 21:32:43.102956   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:43.166600   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:43.166644   19069 fix.go:103] recreateIfNeeded on old-k8s-version-213230: state= err=unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:43.166664   19069 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:32:43.188289   19069 out.go:177] * docker "old-k8s-version-213230" container is missing, will recreate.
	I1025 21:32:43.209013   19069 delete.go:124] DEMOLISHING old-k8s-version-213230 ...
	I1025 21:32:43.209241   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:43.270894   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:32:43.270940   19069 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:43.270961   19069 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:43.271295   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:43.331785   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:43.331829   19069 delete.go:82] Unable to get host status for old-k8s-version-213230, assuming it has already been deleted: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:43.331898   19069 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-213230
	W1025 21:32:43.392299   19069 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-213230 returned with exit code 1
	I1025 21:32:43.392324   19069 kic.go:356] could not find the container old-k8s-version-213230 to remove it. will try anyways
	I1025 21:32:43.392424   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:43.452766   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:32:43.452812   19069 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:43.452884   19069 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0"
	W1025 21:32:43.513744   19069 cli_runner.go:211] docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:32:43.513767   19069 oci.go:646] error shutdown old-k8s-version-213230: docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:44.513869   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:44.576504   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:44.576551   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:44.576561   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:44.576580   19069 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:44.978961   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:45.041759   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:45.041799   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:45.041820   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:45.041838   19069 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:45.804615   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:45.872654   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:45.872691   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:45.872703   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:45.872722   19069 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:47.351243   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:47.415998   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:47.416049   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:47.416065   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:47.416083   19069 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:48.623771   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:48.690769   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:48.690811   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:48.690825   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:48.690845   19069 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:50.922431   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:50.990136   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:50.990186   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:50.990206   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:50.990227   19069 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:54.098688   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:54.162990   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:54.163040   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:54.163055   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:54.163076   19069 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:59.683568   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:32:59.748974   19069 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:59.749013   19069 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:32:59.749025   19069 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:32:59.749053   19069 oci.go:88] couldn't shut down old-k8s-version-213230 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	 
	I1025 21:32:59.749110   19069 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-213230
	I1025 21:32:59.813073   19069 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-213230
	W1025 21:32:59.873428   19069 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-213230 returned with exit code 1
	I1025 21:32:59.873517   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:59.934471   19069 cli_runner.go:164] Run: docker network rm old-k8s-version-213230
	W1025 21:33:00.054433   19069 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:33:00.054450   19069 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:33:01.056663   19069 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:33:01.078746   19069 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:33:01.078850   19069 start.go:159] libmachine.API.Create for "old-k8s-version-213230" (driver="docker")
	I1025 21:33:01.078868   19069 client.go:168] LocalClient.Create starting
	I1025 21:33:01.078948   19069 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:33:01.078984   19069 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:01.078997   19069 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:01.079039   19069 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:33:01.079062   19069 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:01.079071   19069 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:01.099965   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:33:01.164343   19069 cli_runner.go:211] docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:33:01.164423   19069 network_create.go:272] running [docker network inspect old-k8s-version-213230] to gather additional debugging logs...
	I1025 21:33:01.164441   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230
	W1025 21:33:01.228230   19069 cli_runner.go:211] docker network inspect old-k8s-version-213230 returned with exit code 1
	I1025 21:33:01.228255   19069 network_create.go:275] error running [docker network inspect old-k8s-version-213230]: docker network inspect old-k8s-version-213230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-213230
	I1025 21:33:01.228282   19069 network_create.go:277] output of [docker network inspect old-k8s-version-213230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-213230
	
	** /stderr **
	I1025 21:33:01.228354   19069 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:01.292365   19069 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310] misses:2}
	I1025 21:33:01.292395   19069 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:01.292591   19069 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310] misses:3}
	I1025 21:33:01.292601   19069 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:01.292803   19069 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310] amended:false}} dirty:map[] misses:0}
	I1025 21:33:01.292818   19069 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:01.293025   19069 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310] amended:false}} dirty:map[] misses:0}
	I1025 21:33:01.293034   19069 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:01.293253   19069 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310] amended:true}} dirty:map[192.168.49.0:0xc000548528 192.168.58.0:0xc0006dc2a0 192.168.67.0:0xc0006dc2d8 192.168.76.0:0xc0006dc310 192.168.85.0:0xc0006dc4a8] misses:0}
	I1025 21:33:01.293271   19069 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:01.293280   19069 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:33:01.293345   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	I1025 21:33:01.383725   19069 network_create.go:99] docker network old-k8s-version-213230 192.168.85.0/24 created
	I1025 21:33:01.383757   19069 kic.go:106] calculated static IP "192.168.85.2" for the "old-k8s-version-213230" container
	I1025 21:33:01.383853   19069 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:33:01.446494   19069 cli_runner.go:164] Run: docker volume create old-k8s-version-213230 --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:33:01.508596   19069 oci.go:103] Successfully created a docker volume old-k8s-version-213230
	I1025 21:33:01.508848   19069 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:33:01.638202   19069 cli_runner.go:211] docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:33:01.638252   19069 client.go:171] LocalClient.Create took 559.377579ms
	I1025 21:33:03.639332   19069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:03.639428   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:03.703810   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:03.703904   19069 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:03.902478   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:03.967340   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:03.967436   19069 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:04.411933   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:04.478100   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:04.478193   19069 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:04.884752   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:04.950367   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:04.950450   19069 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:05.545106   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:05.610622   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:05.610711   19069 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:05.610735   19069 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:05.610784   19069 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:05.610821   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:05.671069   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:05.671159   19069 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:05.941186   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:06.007591   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:06.007673   19069 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:06.519048   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:06.584286   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:06.584372   19069 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:07.030861   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:07.096097   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:07.096204   19069 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:07.096228   19069 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:07.096237   19069 start.go:128] duration metric: createHost completed in 6.039539712s
	I1025 21:33:07.096297   19069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:07.096341   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:07.159053   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:07.159147   19069 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:07.474552   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:07.538581   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:07.538672   19069 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:07.804833   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:07.866295   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:07.866389   19069 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:08.636760   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:08.702633   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:08.702732   19069 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:08.702754   19069 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:08.702799   19069 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:08.702849   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:08.764226   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:08.764316   19069 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:09.022639   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:09.090661   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:09.090757   19069 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:09.291014   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:09.354890   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:09.354998   19069 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:09.727676   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:09.792125   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:09.792234   19069 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:09.792268   19069 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:09.792280   19069 fix.go:57] fixHost completed within 26.689632136s
	I1025 21:33:09.792287   19069 start.go:83] releasing machines lock for "old-k8s-version-213230", held for 26.689671407s
	W1025 21:33:09.792448   19069 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-213230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-213230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:33:09.835917   19069 out.go:177] 
	W1025 21:33:09.856932   19069 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:33:09.856959   19069 out.go:239] * 
	* 
	W1025 21:33:09.858106   19069 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:33:09.920619   19069 out.go:177] 

** /stderr **
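
The stderr capture above spends most of its ~40s inside minikube's retry loop (retry.go:31), re-running `docker container inspect` with growing delays (231ms, 445ms, 553ms, ...) before giving up. Below is a minimal Go sketch of that retry-with-increasing-delay pattern; the names and the growth factor are assumptions for illustration, not minikube's actual retry.go.

package main

import (
	"fmt"
	"time"
)

// retry re-runs op until it succeeds or attempts are exhausted, sleeping
// an increasing delay between tries, mirroring the "will retry after
// 231ms / 445ms / ..." lines in the log. The growth factor here is an
// assumption, not minikube's implementation.
func retry(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 1; i <= attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; will retry after %v\n", i, err, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return err
}

func main() {
	err := retry(5, 200*time.Millisecond, func() error {
		// Stand-in for `docker container inspect ...`, which in this run
		// always fails because the container was never created.
		return fmt.Errorf("No such container: old-k8s-version-213230")
	})
	fmt.Println("giving up:", err)
}

In the failing run the operation never succeeds, so every loop ends by surfacing the final "No such container" error, which is what the test assertion below reports as exit status 80.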
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-213230 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "138a79bcd7014f40625b0396c0cc759a5bf40b1a8bb16f58bf54a2acb78a98e5",
	        "Created": "2022-10-26T04:33:01.367345574Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (111.931156ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:33:10.132044   19417 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (39.86s)
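
Every docker CLI call in the run above failed with the same root cause: Docker Desktop's containerd backend was down (dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused), so the preload-sidecar `docker run` exited 125 and no container was ever created for the network that the post-mortem inspect still shows. The following is a hedged Go sketch of the kind of pre-flight daemon check a harness could run before a test like this; it assumes `docker` is on PATH and is purely illustrative, not part of minikube.

package main

import (
	"fmt"
	"os/exec"
)

// dockerHealthy asks the daemon for its server version; the command
// fails fast when the backend is unreachable, which is exactly the
// failure mode repeated above (connection refused on the Docker
// Desktop containerd socket).
func dockerHealthy() error {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker daemon unreachable: %v: %s", err, out)
	}
	fmt.Printf("docker server version: %s", out)
	return nil
}

func main() {
	if err := dockerHealthy(); err != nil {
		fmt.Println(err)
	}
}

When the daemon is healthy this prints the server version (20.10.17 in the docker info dump below); when the backend socket is refusing connections it fails immediately instead of burning the retry budget seen above.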

TestStartStop/group/no-preload/serial/FirstStart (39.41s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-213232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-213232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (39.22271962s)

-- stdout --
	* [no-preload-213232] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node no-preload-213232 in cluster no-preload-213232
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-213232" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:32:32.065463   19144 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:32:32.065973   19144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:32.065980   19144 out.go:309] Setting ErrFile to fd 2...
	I1025 21:32:32.065986   19144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:32.066237   19144 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:32:32.067042   19144 out.go:303] Setting JSON to false
	I1025 21:32:32.082165   19144 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5521,"bootTime":1666753231,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:32:32.082295   19144 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:32:32.104386   19144 out.go:177] * [no-preload-213232] minikube v1.27.1 on Darwin 12.6
	I1025 21:32:32.124793   19144 notify.go:220] Checking for updates...
	I1025 21:32:32.146837   19144 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:32:32.205790   19144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:32:32.253996   19144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:32:32.286769   19144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:32:32.332903   19144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:32:32.355882   19144 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:32:32.356099   19144 config.go:180] Loaded profile config "old-k8s-version-213230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 21:32:32.356176   19144 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:32:32.425584   19144 docker.go:137] docker version: linux-20.10.17
	I1025 21:32:32.425751   19144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:32.554860   19144 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:32:32.500778774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:32:32.578181   19144 out.go:177] * Using the docker driver based on user configuration
	I1025 21:32:32.604958   19144 start.go:282] selected driver: docker
	I1025 21:32:32.604984   19144 start.go:808] validating driver "docker" against <nil>
	I1025 21:32:32.605007   19144 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:32:32.608470   19144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:32.738371   19144 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:32:32.68367923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:32:32.738485   19144 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:32:32.738641   19144 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:32:32.759166   19144 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:32:32.781352   19144 cni.go:95] Creating CNI manager for ""
	I1025 21:32:32.781382   19144 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:32:32.781397   19144 start_flags.go:317] config:
	{Name:no-preload-213232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-213232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:32:32.803269   19144 out.go:177] * Starting control plane node no-preload-213232 in cluster no-preload-213232
	I1025 21:32:32.824950   19144 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:32:32.846054   19144 out.go:177] * Pulling base image ...
	I1025 21:32:32.867176   19144 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:32:32.867188   19144 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:32:32.867386   19144 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/no-preload-213232/config.json ...
	I1025 21:32:32.867437   19144 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/no-preload-213232/config.json: {Name:mk93dff9028b1c0c2238a2ed89c52650dd9f2510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
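The profile config dumped at start_flags.go:317 is what lock.go:35 then persists to the profile's config.json under a write lock. A minimal sketch of that persist step in Go, for illustration only: the tiny map stands in for the much larger struct printed above, and the temp-file-plus-rename is an assumed stand-in for minikube's actual file lock.

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		// Hypothetical stand-in for the real profile structure shown in the log.
		cfg := map[string]any{"Name": "no-preload-213232", "Driver": "docker"}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		// Write to a temp file first so a crash cannot leave a torn config;
		// minikube additionally serializes writers with the lock logged above.
		tmp := "config.json.tmp"
		if err := os.WriteFile(tmp, data, 0o644); err != nil {
			panic(err)
		}
		if err := os.Rename(tmp, "config.json"); err != nil {
			panic(err) // rename is atomic on the same filesystem
		}
	}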
	I1025 21:32:32.867527   19144 cache.go:107] acquiring lock: {Name:mk9496eca59ca8d1cbd01dfb5f76b68b912ca8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.867602   19144 cache.go:107] acquiring lock: {Name:mkbb578537582a40800c4eeced6a7027b4a94c0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.867611   19144 cache.go:107] acquiring lock: {Name:mke6a74e8037e86be3f77efd2f3ae0ed51bdab2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.867538   19144 cache.go:107] acquiring lock: {Name:mke6e041073c846ecb833c53066e7029cc1b89cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.869150   19144 cache.go:107] acquiring lock: {Name:mk9254c163158bba7ec1e073185cdb240af77bd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.869569   19144 cache.go:107] acquiring lock: {Name:mk86cad4351afc40081233a18a264f52ef6cc915 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.869602   19144 cache.go:107] acquiring lock: {Name:mk407eb5c2bca35e6b95c015dbc18b7fb7a7319d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.869694   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
	I1025 21:32:32.869697   19144 cache.go:107] acquiring lock: {Name:mk76e961842b5a32a7ee23f80ba0702d856358cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.869758   19144 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 2.174866ms
	I1025 21:32:32.869776   19144 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
	I1025 21:32:32.869791   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
	I1025 21:32:32.870394   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 21:32:32.870407   19144 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.881867ms
	I1025 21:32:32.870416   19144 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 21:32:32.870456   19144 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 2.25356ms
	I1025 21:32:32.870477   19144 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
	I1025 21:32:32.870195   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
	I1025 21:32:32.870550   19144 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 3.053387ms
	I1025 21:32:32.870470   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I1025 21:32:32.870571   19144 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
	I1025 21:32:32.870575   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I1025 21:32:32.870583   19144 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 1.093102ms
	I1025 21:32:32.870614   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I1025 21:32:32.870604   19144 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 1.284167ms
	I1025 21:32:32.870574   19144 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
	I1025 21:32:32.870624   19144 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I1025 21:32:32.870627   19144 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I1025 21:32:32.870625   19144 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 1.152925ms
	I1025 21:32:32.870639   19144 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I1025 21:32:32.870639   19144 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 3.029202ms
	I1025 21:32:32.870647   19144 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
	I1025 21:32:32.870663   19144 cache.go:87] Successfully saved all images to host disk.
	I1025 21:32:32.930758   19144 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:32:32.930784   19144 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:32:32.930793   19144 cache.go:208] Successfully downloaded all kic artifacts
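image.go:76 and image.go:80 above show the pull being skipped because the kicbase image is already present in the local daemon. A minimal sketch of that check-before-pull pattern, shelling out to the Docker CLI; minikube's own implementation uses internal image libraries rather than exec, so this is an assumption for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094"
		// `docker image inspect` exits non-zero when the image is absent,
		// so its exit status doubles as an existence check.
		if err := exec.Command("docker", "image", "inspect", image).Run(); err == nil {
			fmt.Println("found in local daemon, skipping pull")
			return
		}
		// Only hit the network when the local check misses.
		if err := exec.Command("docker", "pull", image).Run(); err != nil {
			fmt.Println("pull failed:", err)
		}
	}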
	I1025 21:32:32.930856   19144 start.go:364] acquiring machines lock for no-preload-213232: {Name:mk0ecc979bb14f8cd1ca75a3ba2690326b8c6623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:32.931017   19144 start.go:368] acquired machines lock for "no-preload-213232" in 149.16µs
	I1025 21:32:32.931041   19144 start.go:93] Provisioning new machine with config: &{Name:no-preload-213232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-213232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:32:32.931138   19144 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:32:32.974534   19144 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:32:32.974854   19144 start.go:159] libmachine.API.Create for "no-preload-213232" (driver="docker")
	I1025 21:32:32.974880   19144 client.go:168] LocalClient.Create starting
	I1025 21:32:32.974978   19144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:32:32.975025   19144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:32.975042   19144 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:32.975112   19144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:32:32.975143   19144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:32:32.975181   19144 main.go:134] libmachine: Parsing certificate...
	I1025 21:32:32.975747   19144 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:32:33.037897   19144 cli_runner.go:211] docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:32:33.037988   19144 network_create.go:272] running [docker network inspect no-preload-213232] to gather additional debugging logs...
	I1025 21:32:33.038028   19144 cli_runner.go:164] Run: docker network inspect no-preload-213232
	W1025 21:32:33.099328   19144 cli_runner.go:211] docker network inspect no-preload-213232 returned with exit code 1
	I1025 21:32:33.099363   19144 network_create.go:275] error running [docker network inspect no-preload-213232]: docker network inspect no-preload-213232: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-213232
	I1025 21:32:33.099377   19144 network_create.go:277] output of [docker network inspect no-preload-213232]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-213232
	
	** /stderr **
	I1025 21:32:33.099444   19144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:32:33.160609   19144 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b48198] misses:0}
	I1025 21:32:33.160647   19144 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:33.160660   19144 network_create.go:115] attempt to create docker network no-preload-213232 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:32:33.160738   19144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	W1025 21:32:33.221431   19144 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232 returned with exit code 1
	W1025 21:32:33.221470   19144 network_create.go:107] failed to create docker network no-preload-213232 192.168.49.0/24, will retry: subnet is taken
	I1025 21:32:33.221742   19144 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198] amended:false}} dirty:map[] misses:0}
	I1025 21:32:33.221759   19144 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:33.221960   19144 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198] amended:true}} dirty:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290] misses:0}
	I1025 21:32:33.221977   19144 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:33.221999   19144 network_create.go:115] attempt to create docker network no-preload-213232 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:32:33.222066   19144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	W1025 21:32:33.282916   19144 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232 returned with exit code 1
	W1025 21:32:33.282948   19144 network_create.go:107] failed to create docker network no-preload-213232 192.168.58.0/24, will retry: subnet is taken
	I1025 21:32:33.283206   19144 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198] amended:true}} dirty:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290] misses:1}
	I1025 21:32:33.283225   19144 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:33.283473   19144 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198] amended:true}} dirty:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290 192.168.67.0:0xc000b481d0] misses:1}
	I1025 21:32:33.283486   19144 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:32:33.283492   19144 network_create.go:115] attempt to create docker network no-preload-213232 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:32:33.283568   19144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	I1025 21:32:33.374754   19144 network_create.go:99] docker network no-preload-213232 192.168.67.0/24 created
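The sequence above is network_create.go walking candidate private /24 subnets (192.168.49.0, .58.0, .67.0) and retrying `docker network create` each time Docker reports the subnet as taken, while network.go reserves each attempt in-process for a minute. A condensed sketch of that probing loop; the in-process reservation map is omitted, and the error text in the comment is the typical daemon message rather than anything this log captured.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "no-preload-213232"
		// Same stride of 9 the log shows: .49, .58, .67, .76, ...
		for third := 49; third <= 76; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			gateway := fmt.Sprintf("192.168.%d.1", third)
			cmd := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name)
			if err := cmd.Run(); err == nil {
				fmt.Println("created", name, "on", subnet)
				return
			}
			// Typically "Pool overlaps with other one on this address space":
			// the subnet is taken, so advance to the next candidate.
		}
		fmt.Println("no free subnet found")
	}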
	I1025 21:32:33.374783   19144 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-213232" container
	I1025 21:32:33.374898   19144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:32:33.435896   19144 cli_runner.go:164] Run: docker volume create no-preload-213232 --label name.minikube.sigs.k8s.io=no-preload-213232 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:32:33.498446   19144 oci.go:103] Successfully created a docker volume no-preload-213232
	I1025 21:32:33.498567   19144 cli_runner.go:164] Run: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:32:33.717643   19144 cli_runner.go:211] docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:32:33.717695   19144 client.go:171] LocalClient.Create took 742.805538ms
	I1025 21:32:35.720076   19144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:35.720178   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:35.785460   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:35.785548   19144 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
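	Each retry above runs the same Go template through `docker container inspect` to recover the host port mapped to the container's 22/tcp; the failures are expected here, since the sidecar `docker run` at 21:32:33 exited 125 and the container was never created. A standalone version of that port lookup, assuming the container actually exists:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same template the log runs: index NetworkSettings.Ports["22/tcp"]
		// and take the first binding's HostPort.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"no-preload-213232").Output()
		if err != nil {
			// "No such container" surfaces as exit status 1, as seen above.
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh port:", string(out))
	}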
	I1025 21:32:36.062536   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:36.127902   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:36.127979   19144 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:36.670529   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:36.733937   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:36.734017   19144 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:37.391264   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:37.453900   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:32:37.453994   19144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:32:37.454014   19144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:37.454061   19144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:32:37.454116   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:37.513915   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:37.513991   19144 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:37.747502   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:37.814301   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:37.814381   19144 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:38.261844   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:38.328574   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:38.328654   19144 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:38.647608   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:38.713757   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:32:38.713837   19144 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:39.270209   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:32:39.334285   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:32:39.334369   19144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:32:39.334386   19144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:39.334407   19144 start.go:128] duration metric: createHost completed in 6.403243625s
	I1025 21:32:39.334415   19144 start.go:83] releasing machines lock for "no-preload-213232", held for 6.403370472s
	W1025 21:32:39.334429   19144 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:32:39.334853   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:39.395027   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:39.395075   19144 delete.go:82] Unable to get host status for no-preload-213232, assuming it has already been deleted: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:32:39.395210   19144 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:32:39.395219   19144 start.go:618] Will try again in 5 seconds ...
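The root cause is the stderr above rather than the missing container: the daemon could not dial /var/run/desktop-containerd/containerd.sock, which suggests Docker Desktop's embedded containerd was unreachable, so `docker run` never created the preload sidecar and every subsequent inspect of no-preload-213232 had nothing to find. A cheap pre-flight probe that would surface this state directly; this is a sketch, not part of minikube.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `docker version` round-trips to the daemon; when Docker Desktop's
		// backend is down it fails fast, which is easier to diagnose than a
		// failed `docker run` deep inside cluster creation.
		out, err := exec.Command("docker", "version",
			"--format", "{{.Server.Version}}").CombinedOutput()
		if err != nil {
			fmt.Printf("daemon unreachable: %v\n%s", err, out)
			return
		}
		fmt.Println("server version:", string(out))
	}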
	I1025 21:32:44.397368   19144 start.go:364] acquiring machines lock for no-preload-213232: {Name:mk0ecc979bb14f8cd1ca75a3ba2690326b8c6623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:32:44.397635   19144 start.go:368] acquired machines lock for "no-preload-213232" in 202.118µs
	I1025 21:32:44.397664   19144 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:32:44.397680   19144 fix.go:55] fixHost starting: 
	I1025 21:32:44.398106   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:44.463323   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:44.463364   19144 fix.go:103] recreateIfNeeded on no-preload-213232: state= err=unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:44.463384   19144 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:32:44.485363   19144 out.go:177] * docker "no-preload-213232" container is missing, will recreate.
	I1025 21:32:44.506698   19144 delete.go:124] DEMOLISHING no-preload-213232 ...
	I1025 21:32:44.506947   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:44.571048   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:32:44.571088   19144 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:44.571101   19144 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:44.571514   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:44.632622   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:44.632665   19144 delete.go:82] Unable to get host status for no-preload-213232, assuming it has already been deleted: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:44.632757   19144 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-213232
	W1025 21:32:44.693353   19144 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-213232 returned with exit code 1
	I1025 21:32:44.693380   19144 kic.go:356] could not find the container no-preload-213232 to remove it. will try anyways
	I1025 21:32:44.693453   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:44.753630   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:32:44.753669   19144 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:44.753731   19144 cli_runner.go:164] Run: docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0"
	W1025 21:32:44.813484   19144 cli_runner.go:211] docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:32:44.813510   19144 oci.go:646] error shutdown no-preload-213232: docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:45.814060   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:45.878360   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:45.878396   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:45.878406   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:45.878423   19144 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
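	Note the literal "%v" in the retry message above ("couldn't verify container is exited. %v: ..."): the verb is never interpolated, which looks like a format string being concatenated instead of passed through fmt somewhere in the retry/oci path. A minimal, hypothetical reproduction of the mismatch and the intended shape (not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
	)

	func main() {
		err := errors.New(`unknown state "no-preload-213232"`)
		// Buggy shape: the verb sits in a plain string that never goes through
		// a formatting call, so "%v" prints literally, as in the log.
		fmt.Println("couldn't verify container is exited. %v: " + err.Error())
		// Intended shape: the formatting call substitutes the error for the verb.
		fmt.Printf("couldn't verify container is exited. %v\n", err)
	}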
	I1025 21:32:46.281157   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:46.343040   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:46.343081   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:46.343092   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:46.343111   19144 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:47.106798   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:47.171710   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:47.171749   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:47.171760   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:47.171794   19144 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:48.649810   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:48.713845   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:48.713885   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:48.713914   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:48.713934   19144 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:49.921638   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:49.986903   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:49.986945   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:49.986956   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:49.986975   19144 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:52.217108   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:52.282639   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:52.282691   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:52.282705   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:52.282723   19144 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:55.391184   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:32:55.454673   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:32:55.454711   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:32:55.454722   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:32:55.454741   19144 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:00.973512   19144 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:01.039473   19144 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:01.039517   19144 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:01.039528   19144 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:01.039553   19144 oci.go:88] couldn't shut down no-preload-213232 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	 
	I1025 21:33:01.039613   19144 cli_runner.go:164] Run: docker rm -f -v no-preload-213232
	I1025 21:33:01.110448   19144 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-213232
	W1025 21:33:01.173215   19144 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-213232 returned with exit code 1
	I1025 21:33:01.173318   19144 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:01.236036   19144 cli_runner.go:164] Run: docker network rm no-preload-213232
	W1025 21:33:01.340890   19144 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:33:01.340908   19144 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:33:02.341055   19144 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:33:02.363160   19144 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:33:02.363370   19144 start.go:159] libmachine.API.Create for "no-preload-213232" (driver="docker")
	I1025 21:33:02.363400   19144 client.go:168] LocalClient.Create starting
	I1025 21:33:02.363517   19144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:33:02.363584   19144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:02.363608   19144 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:02.363673   19144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:33:02.363716   19144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:02.363730   19144 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:02.364452   19144 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:33:02.428116   19144 cli_runner.go:211] docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:33:02.428183   19144 network_create.go:272] running [docker network inspect no-preload-213232] to gather additional debugging logs...
	I1025 21:33:02.428202   19144 cli_runner.go:164] Run: docker network inspect no-preload-213232
	W1025 21:33:02.488868   19144 cli_runner.go:211] docker network inspect no-preload-213232 returned with exit code 1
	I1025 21:33:02.488891   19144 network_create.go:275] error running [docker network inspect no-preload-213232]: docker network inspect no-preload-213232: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-213232
	I1025 21:33:02.488905   19144 network_create.go:277] output of [docker network inspect no-preload-213232]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-213232
	
	** /stderr **
	I1025 21:33:02.489009   19144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:02.550352   19144 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198] amended:true}} dirty:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290 192.168.67.0:0xc000b481d0] misses:1}
	I1025 21:33:02.550379   19144 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:02.550611   19144 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198] amended:true}} dirty:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290 192.168.67.0:0xc000b481d0] misses:2}
	I1025 21:33:02.550623   19144 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:02.550822   19144 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290 192.168.67.0:0xc000b481d0] amended:false}} dirty:map[] misses:0}
	I1025 21:33:02.550831   19144 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:02.551027   19144 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290 192.168.67.0:0xc000b481d0] amended:true}} dirty:map[192.168.49.0:0xc000b48198 192.168.58.0:0xc0004c4290 192.168.67.0:0xc000b481d0 192.168.76.0:0xc0004c4438] misses:0}
	I1025 21:33:02.551040   19144 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:02.551046   19144 network_create.go:115] attempt to create docker network no-preload-213232 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:33:02.551113   19144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	I1025 21:33:02.641759   19144 network_create.go:99] docker network no-preload-213232 192.168.76.0/24 created
	I1025 21:33:02.641791   19144 kic.go:106] calculated static IP "192.168.76.2" for the "no-preload-213232" container
	I1025 21:33:02.641869   19144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:33:02.704683   19144 cli_runner.go:164] Run: docker volume create no-preload-213232 --label name.minikube.sigs.k8s.io=no-preload-213232 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:33:02.765406   19144 oci.go:103] Successfully created a docker volume no-preload-213232
	I1025 21:33:02.765523   19144 cli_runner.go:164] Run: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:33:02.896424   19144 cli_runner.go:211] docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:33:02.896462   19144 client.go:171] LocalClient.Create took 533.052951ms
	I1025 21:33:04.896673   19144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:04.896734   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:04.958238   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:04.958318   19144 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:05.158797   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:05.222307   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:05.222421   19144 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:05.664832   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:05.726021   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:05.726107   19144 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:06.132623   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:06.197510   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:06.197615   19144 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:06.793271   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:06.857170   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:06.857264   19144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:06.857284   19144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:06.857335   19144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:06.857375   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:06.920555   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:06.920641   19144 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:07.190647   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:07.253530   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:07.253625   19144 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:07.766929   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:07.831171   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:07.831262   19144 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:08.277679   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:08.340507   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:08.340603   19144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:08.340620   19144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:08.340644   19144 start.go:128] duration metric: createHost completed in 5.999471985s
	I1025 21:33:08.340714   19144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:08.340752   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:08.401896   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:08.401971   19144 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:08.715587   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:08.776505   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:08.776597   19144 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:09.043278   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:09.108767   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:09.108979   19144 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:09.879104   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:09.950055   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:09.950260   19144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:09.950277   19144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:09.950349   19144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:09.950394   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:10.014719   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:10.014816   19144 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:10.270872   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:10.332785   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:10.332881   19144 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:10.531138   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:10.622511   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:10.622666   19144 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:10.993418   19144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:11.085080   19144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:11.085182   19144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:11.085204   19144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:11.085216   19144 fix.go:57] fixHost completed within 26.687449473s
	I1025 21:33:11.085224   19144 start.go:83] releasing machines lock for "no-preload-213232", held for 26.687490674s
	W1025 21:33:11.085376   19144 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-213232" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p no-preload-213232" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:33:11.128124   19144 out.go:177] 
	W1025 21:33:11.149783   19144 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:33:11.149814   19144 out.go:239] * 
	* 
	W1025 21:33:11.151140   19144 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:33:11.214652   19144 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p no-preload-213232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "e1aea579fdb9c1acd37d4d09e9e947cd0dc2e0fa3494023796e6950031490ac8",
	        "Created": "2022-10-26T04:33:02.62602045Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (111.130646ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:11.425822   19459 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (39.41s)
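The network.go lines in the trace above show how minikube settled on 192.168.76.0/24: it walks candidate /24 subnets (the third octet advances by 9 throughout this report: 49, 58, 67, 76, and 85 for the old-k8s-version profile), skips any candidate that still has an unexpired reservation, and reserves the first free one for 1m0s. A minimal Go sketch of that selection loop, assuming the step-by-9 candidate list and the map-based reservation table that the log output suggests (illustrative only, not minikube's actual network.go):

package main

import (
	"fmt"
	"time"
)

type reservation struct{ expires time.Time }

// reserved mirrors the reservation table the log dumps on each
// "skipping subnet ... that has unexpired reservation" line.
var reserved = map[string]reservation{}

// firstFreeSubnet returns the first candidate /24 without an unexpired
// reservation, reserving it for one minute ("reserving subnet ... for 1m0s").
func firstFreeSubnet(now time.Time) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		candidate := fmt.Sprintf("192.168.%d.0", octet)
		if r, ok := reserved[candidate]; ok && now.Before(r.expires) {
			continue // "skipping subnet ... that is reserved"
		}
		reserved[candidate] = reservation{expires: now.Add(time.Minute)}
		return candidate + "/24", true
	}
	return "", false
}

func main() {
	now := time.Now()
	for _, taken := range []string{"192.168.49.0", "192.168.58.0", "192.168.67.0"} {
		reserved[taken] = reservation{expires: now.Add(time.Minute)}
	}
	fmt.Println(firstFreeSubnet(now)) // 192.168.76.0/24 true
}

Fed the three reservations visible in the log, the sketch lands on 192.168.76.0/24, matching the "using free private subnet" line. Note that subnet selection succeeded here; the test only failed afterwards, when the preload-sidecar `docker run` hit the dead containerd.sock.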

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-213230 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-213230 create -f testdata/busybox.yaml: exit status 1 (33.164371ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-213230" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-213230 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "138a79bcd7014f40625b0396c0cc759a5bf40b1a8bb16f58bf54a2acb78a98e5",
	        "Created": "2022-10-26T04:33:01.367345574Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (111.902304ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:10.342087   19424 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "138a79bcd7014f40625b0396c0cc759a5bf40b1a8bb16f58bf54a2acb78a98e5",
	        "Created": "2022-10-26T04:33:01.367345574Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (112.820673ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:10.519078   19432 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.39s)
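From here on, every subtest in this group fails identically: the kubectl context was never written, because FirstStart aborted before provisioning the cluster. A hedged sketch of the kind of pre-check a caller could make before `kubectl create` (the helper name is invented for illustration; `kubectl config get-contexts -o name`, which prints one context name per line, is standard kubectl):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl knows the named context.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-213230")
	fmt.Println(ok, err) // false <nil> on this worker: the cluster never came up
}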

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-213230 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-213230 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-213230 describe deploy/metrics-server -n kube-system: exit status 1 (33.291968ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-213230" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-213230 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "138a79bcd7014f40625b0396c0cc759a5bf40b1a8bb16f58bf54a2acb78a98e5",
	        "Created": "2022-10-26T04:33:01.367345574Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (112.054104ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:10.962761   19447 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.44s)
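The assertion at start_stop_delete_test.go:221 spells out how the two flags compose: the `--registries=MetricsServer=fake.domain` override is prefixed onto `--images=MetricsServer=k8s.gcr.io/echoserver:1.4`, so the deployment is expected to reference fake.domain/k8s.gcr.io/echoserver:1.4. A one-function sketch of that composition (illustrative, not minikube's addon code):

package main

import "fmt"

// overrideImage joins a registry override onto an image override the way the
// expected string in this test implies: "<registry>/<image>".
func overrideImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(overrideImage("fake.domain", "k8s.gcr.io/echoserver:1.4"))
	// fake.domain/k8s.gcr.io/echoserver:1.4
}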

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-213230 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p old-k8s-version-213230 --alsologtostderr -v=3: exit status 82 (14.825334708s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-213230"  ...
	* Stopping node "old-k8s-version-213230"  ...
	* Stopping node "old-k8s-version-213230"  ...
	* Stopping node "old-k8s-version-213230"  ...
	* Stopping node "old-k8s-version-213230"  ...
	* Stopping node "old-k8s-version-213230"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:33:11.013765   19451 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:33:11.013931   19451 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:11.013936   19451 out.go:309] Setting ErrFile to fd 2...
	I1025 21:33:11.013940   19451 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:11.014069   19451 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:33:11.014360   19451 out.go:303] Setting JSON to false
	I1025 21:33:11.014500   19451 mustload.go:65] Loading cluster: old-k8s-version-213230
	I1025 21:33:11.014760   19451 config.go:180] Loaded profile config "old-k8s-version-213230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 21:33:11.014818   19451 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/old-k8s-version-213230/config.json ...
	I1025 21:33:11.015081   19451 mustload.go:65] Loading cluster: old-k8s-version-213230
	I1025 21:33:11.015171   19451 config.go:180] Loaded profile config "old-k8s-version-213230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 21:33:11.015206   19451 stop.go:39] StopHost: old-k8s-version-213230
	I1025 21:33:11.036591   19451 out.go:177] * Stopping node "old-k8s-version-213230"  ...
	I1025 21:33:11.078358   19451 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:11.239793   19451 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:11.239900   19451 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:33:11.239938   19451 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:11.239966   19451 retry.go:31] will retry after 1.104660288s: docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:12.346275   19451 stop.go:39] StopHost: old-k8s-version-213230
	I1025 21:33:12.421717   19451 out.go:177] * Stopping node "old-k8s-version-213230"  ...
	I1025 21:33:12.443232   19451 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:12.504765   19451 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:12.504812   19451 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:33:12.504839   19451 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:12.504856   19451 retry.go:31] will retry after 2.160763633s: docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:14.667743   19451 stop.go:39] StopHost: old-k8s-version-213230
	I1025 21:33:14.690321   19451 out.go:177] * Stopping node "old-k8s-version-213230"  ...
	I1025 21:33:14.733253   19451 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:14.797288   19451 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:14.797328   19451 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:33:14.797343   19451 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:14.797359   19451 retry.go:31] will retry after 2.62026012s: docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:17.419933   19451 stop.go:39] StopHost: old-k8s-version-213230
	I1025 21:33:17.442269   19451 out.go:177] * Stopping node "old-k8s-version-213230"  ...
	I1025 21:33:17.485326   19451 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:17.548986   19451 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:17.549020   19451 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:33:17.549039   19451 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:17.549053   19451 retry.go:31] will retry after 3.164785382s: docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:20.716000   19451 stop.go:39] StopHost: old-k8s-version-213230
	I1025 21:33:20.738534   19451 out.go:177] * Stopping node "old-k8s-version-213230"  ...
	I1025 21:33:20.781323   19451 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:20.845666   19451 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:20.845716   19451 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:33:20.845733   19451 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:20.845750   19451 retry.go:31] will retry after 4.680977329s: docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:25.526904   19451 stop.go:39] StopHost: old-k8s-version-213230
	I1025 21:33:25.549311   19451 out.go:177] * Stopping node "old-k8s-version-213230"  ...
	I1025 21:33:25.571265   19451 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:25.636685   19451 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:25.636721   19451 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	W1025 21:33:25.636736   19451 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:25.657899   19451 out.go:177] 
	W1025 21:33:25.679886   19451 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-213230 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-213230 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:25.679945   19451 out.go:239] * 
	* 
	W1025 21:33:25.683829   19451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:33:25.745669   19451 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p old-k8s-version-213230 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "138a79bcd7014f40625b0396c0cc759a5bf40b1a8bb16f58bf54a2acb78a98e5",
	        "Created": "2022-10-26T04:33:01.367345574Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (111.763583ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:25.965593   19523 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (15.00s)
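The stop path above retries `docker container inspect --format={{.State.Status}}` with growing, jittered delays (1.10s, 2.16s, 2.62s, 3.16s, 4.68s here) and gives up near the 15s mark with GUEST_STOP_TIMEOUT (exit status 82). A compact sketch of that retry shape, assuming geometric backoff with jitter against a fixed deadline (not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// containerState asks Docker for the container's state; on this worker it
// always fails with "No such container", as the log above shows.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return string(out), err
}

// stopWithRetry polls until the deadline, sleeping a jittered, geometrically
// growing delay between attempts, like the "will retry after ..." lines.
func stopWithRetry(name string, deadline time.Duration) error {
	delay := time.Second
	start := time.Now()
	for time.Since(start) < deadline {
		if _, err := containerState(name); err == nil {
			return nil // container visible; real code would now power it off
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return errors.New("GUEST_STOP_TIMEOUT") // minikube exits 82 here
}

func main() {
	fmt.Println(stopWithRetry("old-k8s-version-213230", 15*time.Second))
}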

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-213232 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-213232 create -f testdata/busybox.yaml: exit status 1 (32.732314ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-213232" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-213232 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "e1aea579fdb9c1acd37d4d09e9e947cd0dc2e0fa3494023796e6950031490ac8",
	        "Created": "2022-10-26T04:33:02.62602045Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (112.396092ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:11.636350   19468 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "e1aea579fdb9c1acd37d4d09e9e947cd0dc2e0fa3494023796e6950031490ac8",
	        "Created": "2022-10-26T04:33:02.62602045Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (111.934043ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:33:11.820245   19474 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.39s)
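
Note on this failure mode: the kubectl error is secondary. The context "no-preload-213232" does not exist because the cluster container was never created during FirstStart; the docker inspect output above shows only the minikube-created network surviving, with "Containers": {} empty. Assuming that state, it can be confirmed by hand with a short diagnostic sketch (illustrative commands, not part of the test harness):

    # the context never made it into the kubeconfig
    kubectl config get-contexts -o name | grep no-preload-213232 || echo "context missing"
    # the network exists but has no containers attached
    docker network inspect no-preload-213232 --format '{{len .Containers}} containers attached'
    # and no container by that name was ever created
    docker ps -a --filter name=no-preload-213232 --format '{{.Names}} {{.Status}}'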

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-213232 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-213232 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-213232 describe deploy/metrics-server -n kube-system: exit status 1 (33.354509ms)

** stderr ** 
	error: context "no-preload-213232" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-213232 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "e1aea579fdb9c1acd37d4d09e9e947cd0dc2e0fa3494023796e6950031490ac8",
	        "Created": "2022-10-26T04:33:02.62602045Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (111.568457ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:33:12.262253   19485 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.44s)
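
Two things are worth separating in this failure: "addons enable" itself returned success even with no running cluster (presumably it only updated the profile configuration), and the failure surfaced in the follow-up kubectl describe, which is the step that asserts the custom registry and image reached the deployment. Against a healthy cluster, the equivalent hand-run check would look roughly like this (a sketch of the assertion at start_stop_delete_test.go:221, not the harness code itself):

    out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-213232 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-213232 -n kube-system describe deploy/metrics-server \
      | grep 'fake.domain/k8s.gcr.io/echoserver:1.4'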

TestStartStop/group/no-preload/serial/Stop (14.89s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-213232 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p no-preload-213232 --alsologtostderr -v=3: exit status 82 (14.703263585s)

-- stdout --
	* Stopping node "no-preload-213232"  ...
	* Stopping node "no-preload-213232"  ...
	* Stopping node "no-preload-213232"  ...
	* Stopping node "no-preload-213232"  ...
	* Stopping node "no-preload-213232"  ...
	* Stopping node "no-preload-213232"  ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:33:12.312570   19489 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:33:12.312743   19489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:12.312749   19489 out.go:309] Setting ErrFile to fd 2...
	I1025 21:33:12.312753   19489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:12.312874   19489 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:33:12.313182   19489 out.go:303] Setting JSON to false
	I1025 21:33:12.313328   19489 mustload.go:65] Loading cluster: no-preload-213232
	I1025 21:33:12.313611   19489 config.go:180] Loaded profile config "no-preload-213232": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:33:12.313671   19489 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/no-preload-213232/config.json ...
	I1025 21:33:12.313929   19489 mustload.go:65] Loading cluster: no-preload-213232
	I1025 21:33:12.314017   19489 config.go:180] Loaded profile config "no-preload-213232": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:33:12.314047   19489 stop.go:39] StopHost: no-preload-213232
	I1025 21:33:12.336541   19489 out.go:177] * Stopping node "no-preload-213232"  ...
	I1025 21:33:12.379374   19489 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:12.451532   19489 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:12.451587   19489 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:33:12.451603   19489 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:12.451627   19489 retry.go:31] will retry after 1.104660288s: docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:13.556727   19489 stop.go:39] StopHost: no-preload-213232
	I1025 21:33:13.579062   19489 out.go:177] * Stopping node "no-preload-213232"  ...
	I1025 21:33:13.601193   19489 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:13.664835   19489 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:13.664878   19489 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:33:13.664897   19489 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:13.664914   19489 retry.go:31] will retry after 2.160763633s: docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:15.827804   19489 stop.go:39] StopHost: no-preload-213232
	I1025 21:33:15.850443   19489 out.go:177] * Stopping node "no-preload-213232"  ...
	I1025 21:33:15.872208   19489 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:15.936446   19489 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:15.936488   19489 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:33:15.936505   19489 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:15.936520   19489 retry.go:31] will retry after 2.62026012s: docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:18.557443   19489 stop.go:39] StopHost: no-preload-213232
	I1025 21:33:18.579838   19489 out.go:177] * Stopping node "no-preload-213232"  ...
	I1025 21:33:18.622571   19489 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:18.687373   19489 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:18.687407   19489 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:33:18.687422   19489 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:18.687436   19489 retry.go:31] will retry after 3.164785382s: docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:21.854348   19489 stop.go:39] StopHost: no-preload-213232
	I1025 21:33:21.876915   19489 out.go:177] * Stopping node "no-preload-213232"  ...
	I1025 21:33:21.919853   19489 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:21.985045   19489 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:21.985082   19489 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:33:21.985095   19489 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:21.985113   19489 retry.go:31] will retry after 4.680977329s: docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:26.666134   19489 stop.go:39] StopHost: no-preload-213232
	I1025 21:33:26.703037   19489 out.go:177] * Stopping node "no-preload-213232"  ...
	I1025 21:33:26.724134   19489 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:26.816576   19489 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:26.816625   19489 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	W1025 21:33:26.816637   19489 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:26.837863   19489 out.go:177] 
	W1025 21:33:26.858766   19489 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-213232 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-213232 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:26.858782   19489 out.go:239] * 
	* 
	W1025 21:33:26.861295   19489 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:33:26.922947   19489 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-darwin-amd64 stop -p no-preload-213232 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "e1aea579fdb9c1acd37d4d09e9e947cd0dc2e0fa3494023796e6950031490ac8",
	        "Created": "2022-10-26T04:33:02.62602045Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (121.893024ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:33:27.156215   19556 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (14.89s)
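
Exit status 82 is minikube's GUEST_STOP_TIMEOUT. Each repeated "Stopping node" line in the stdout above is one pass of a retry loop, visible in the stderr as retries after roughly 1.1s, 2.2s, 2.6s, 3.2s, and 4.7s, around a single state probe that keeps failing because the container is missing rather than merely still running. The probe being retried is just (reproduced by hand as an illustration):

    docker container inspect no-preload-213232 --format '{{.State.Status}}'
    # stderr: Error: No such container: no-preload-213232   (exit status 1)

Because the probe yields "unknown state" instead of a running or stopped status, stop.go cannot conclude the node is down, so it retries until its timeout and then exits 82.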

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (111.784772ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:33:26.077606   19527 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-213230 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "138a79bcd7014f40625b0396c0cc759a5bf40b1a8bb16f58bf54a2acb78a98e5",
	        "Created": "2022-10-26T04:33:01.367345574Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (113.536935ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:33:26.507287   19537 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.54s)
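
The first failing assertion here (start_stop_delete_test.go:241 above) is a plain string comparison on the Host field of minikube status: after a successful stop it expects "Stopped", but with the container deleted the probe reports "Nonexistent". The same probe can be re-run by hand (illustrative, using this run's binary and profile):

    out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230
    # stdout: Nonexistent   (exit status 7, which helpers_test.go:239 treats as "may be ok")

So this failure is downstream of the earlier FirstStart and Stop failures: the host never existed, and no post-stop state could be observed.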

TestStartStop/group/old-k8s-version/serial/SecondStart (62.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-213230 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-213230 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 80 (1m1.924925048s)

-- stdout --
	* [old-k8s-version-213230] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-213230 in cluster old-k8s-version-213230
	* Pulling base image ...
	* docker "old-k8s-version-213230" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-213230" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:33:26.557362   19543 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:33:26.557497   19543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:26.557502   19543 out.go:309] Setting ErrFile to fd 2...
	I1025 21:33:26.557511   19543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:26.557621   19543 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:33:26.558101   19543 out.go:303] Setting JSON to false
	I1025 21:33:26.572620   19543 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5575,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:33:26.572717   19543 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:33:26.594312   19543 out.go:177] * [old-k8s-version-213230] minikube v1.27.1 on Darwin 12.6
	I1025 21:33:26.616376   19543 notify.go:220] Checking for updates...
	I1025 21:33:26.616395   19543 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:33:26.638309   19543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:33:26.659843   19543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:33:26.703014   19543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:33:26.744986   19543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:33:26.766254   19543 config.go:180] Loaded profile config "old-k8s-version-213230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 21:33:26.787774   19543 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1025 21:33:26.809163   19543 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:33:26.968320   19543 docker.go:137] docker version: linux-20.10.17
	I1025 21:33:26.968508   19543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:33:27.103555   19543 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:40 SystemTime:2022-10-26 04:33:27.041418879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:33:27.125203   19543 out.go:177] * Using the docker driver based on existing profile
	I1025 21:33:27.146447   19543 start.go:282] selected driver: docker
	I1025 21:33:27.146482   19543 start.go:808] validating driver "docker" against &{Name:old-k8s-version-213230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-213230 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:33:27.146632   19543 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:33:27.149907   19543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:33:27.282387   19543 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:33:27.22320353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:33:27.282530   19543 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:33:27.282557   19543 cni.go:95] Creating CNI manager for ""
	I1025 21:33:27.282566   19543 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:33:27.282576   19543 start_flags.go:317] config:
	{Name:old-k8s-version-213230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-213230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:33:27.324963   19543 out.go:177] * Starting control plane node old-k8s-version-213230 in cluster old-k8s-version-213230
	I1025 21:33:27.346379   19543 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:33:27.368369   19543 out.go:177] * Pulling base image ...
	I1025 21:33:27.412356   19543 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 21:33:27.412402   19543 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:33:27.412463   19543 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 21:33:27.412502   19543 cache.go:57] Caching tarball of preloaded images
	I1025 21:33:27.412734   19543 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:33:27.412760   19543 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 21:33:27.413807   19543 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/old-k8s-version-213230/config.json ...
	I1025 21:33:27.476612   19543 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:33:27.476632   19543 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:33:27.476641   19543 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:33:27.476695   19543 start.go:364] acquiring machines lock for old-k8s-version-213230: {Name:mkf15d742925eff5dfa273d5f3f97b7bc6f95cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:27.476779   19543 start.go:368] acquired machines lock for "old-k8s-version-213230" in 64.463µs
	I1025 21:33:27.476795   19543 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:33:27.476804   19543 fix.go:55] fixHost starting: 
	I1025 21:33:27.477012   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:27.586028   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:27.586075   19543 fix.go:103] recreateIfNeeded on old-k8s-version-213230: state= err=unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:27.586098   19543 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:33:27.628615   19543 out.go:177] * docker "old-k8s-version-213230" container is missing, will recreate.
	I1025 21:33:27.650439   19543 delete.go:124] DEMOLISHING old-k8s-version-213230 ...
	I1025 21:33:27.650647   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:27.713895   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:27.713935   19543 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:27.713950   19543 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:27.714287   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:27.777087   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:27.777132   19543 delete.go:82] Unable to get host status for old-k8s-version-213230, assuming it has already been deleted: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:27.777211   19543 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-213230
	W1025 21:33:27.839333   19543 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-213230 returned with exit code 1
	I1025 21:33:27.839368   19543 kic.go:356] could not find the container old-k8s-version-213230 to remove it. will try anyways
	I1025 21:33:27.839459   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:27.901970   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:27.902034   19543 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:27.902089   19543 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0"
	W1025 21:33:28.065302   19543 cli_runner.go:211] docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:33:28.065342   19543 oci.go:646] error shutdown old-k8s-version-213230: docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:29.065457   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:29.125365   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:29.125402   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:29.125411   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:29.125439   19543 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:29.680193   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:29.743355   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:29.743405   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:29.743415   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:29.743437   19543 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:30.824396   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:30.887510   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:30.887555   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:30.887566   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:30.887586   19543 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:32.198058   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:32.263632   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:32.263674   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:32.263684   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:32.263702   19543 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:33.847279   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:33.912237   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:33.912278   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:33.912289   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:33.912311   19543 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:36.253309   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:36.317467   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:36.317512   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:36.317525   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:36.317573   19543 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:40.826221   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:40.891379   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:40.891436   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:40.891446   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:40.891469   19543 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:44.114005   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:44.177871   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:44.177915   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:44.177926   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:33:44.177954   19543 oci.go:88] couldn't shut down old-k8s-version-213230 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	 
	I1025 21:33:44.178015   19543 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-213230
	I1025 21:33:44.242489   19543 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-213230
	W1025 21:33:44.302520   19543 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-213230 returned with exit code 1
	I1025 21:33:44.302636   19543 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:44.364271   19543 cli_runner.go:164] Run: docker network rm old-k8s-version-213230
	W1025 21:33:44.474022   19543 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:33:44.474039   19543 fix.go:115] Sleeping 1 second for extra luck!
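The demolish phase above is a retry ladder: oci.go polls `docker container inspect --format={{.State.Status}}`, retry.go reschedules each transient failure with a growing delay, and once the budget is spent the code shrugs ("might be okay") and falls through to `docker rm -f -v`. A minimal sketch of that poll-and-backoff shape, assuming illustrative delays and helper names rather than minikube's actual retry package:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerState runs `docker container inspect` the same way the log does
    // and returns the trimmed .State.Status value ("exited", "running", ...).
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitExited polls until the container reports "exited" or the attempt
    // budget runs out, roughly mirroring the retry.go/oci.go interplay above.
    // The starting delay and doubling are illustrative, not minikube's values.
    func waitExited(name string, attempts int) error {
    	delay := 400 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		state, err := containerState(name)
    		if err == nil && state == "exited" {
    			return nil
    		}
    		fmt.Printf("will retry after %v: state=%q err=%v\n", delay, state, err)
    		time.Sleep(delay)
    		delay *= 2 // stretch the wait between polls
    	}
    	return fmt.Errorf("couldn't verify container %s is exited", name)
    }

    func main() {
    	if err := waitExited("old-k8s-version-213230", 5); err != nil {
    		fmt.Println(err, "(might be okay; falling through to docker rm -f)")
    	}
    }

Here every poll fails identically because the container never existed, so the backoff only stretches out an inevitable timeout before the forced removal.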
	I1025 21:33:45.475581   19543 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:33:45.519083   19543 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:33:45.519298   19543 start.go:159] libmachine.API.Create for "old-k8s-version-213230" (driver="docker")
	I1025 21:33:45.519354   19543 client.go:168] LocalClient.Create starting
	I1025 21:33:45.519485   19543 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:33:45.519548   19543 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:45.519594   19543 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:45.519705   19543 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:33:45.519752   19543 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:45.519768   19543 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:45.520370   19543 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:33:45.585196   19543 cli_runner.go:211] docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:33:45.585271   19543 network_create.go:272] running [docker network inspect old-k8s-version-213230] to gather additional debugging logs...
	I1025 21:33:45.585289   19543 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230
	W1025 21:33:45.646558   19543 cli_runner.go:211] docker network inspect old-k8s-version-213230 returned with exit code 1
	I1025 21:33:45.646580   19543 network_create.go:275] error running [docker network inspect old-k8s-version-213230]: docker network inspect old-k8s-version-213230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-213230
	I1025 21:33:45.646595   19543 network_create.go:277] output of [docker network inspect old-k8s-version-213230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-213230
	
	** /stderr **
	I1025 21:33:45.646698   19543 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:45.707920   19543 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a329b8] misses:0}
	I1025 21:33:45.707959   19543 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:45.707972   19543 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:33:45.708039   19543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	W1025 21:33:45.769675   19543 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230 returned with exit code 1
	W1025 21:33:45.769707   19543 network_create.go:107] failed to create docker network old-k8s-version-213230 192.168.49.0/24, will retry: subnet is taken
	I1025 21:33:45.769966   19543 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8] amended:false}} dirty:map[] misses:0}
	I1025 21:33:45.769983   19543 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:45.770173   19543 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8] amended:true}} dirty:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098] misses:0}
	I1025 21:33:45.770186   19543 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:45.770194   19543 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:33:45.770250   19543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	W1025 21:33:45.830680   19543 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230 returned with exit code 1
	W1025 21:33:45.830718   19543 network_create.go:107] failed to create docker network old-k8s-version-213230 192.168.58.0/24, will retry: subnet is taken
	I1025 21:33:45.830979   19543 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8] amended:true}} dirty:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098] misses:1}
	I1025 21:33:45.830998   19543 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:45.831198   19543 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8] amended:true}} dirty:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098 192.168.67.0:0xc000c00030] misses:1}
	I1025 21:33:45.831208   19543 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:45.831216   19543 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:33:45.831289   19543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	I1025 21:33:45.921908   19543 network_create.go:99] docker network old-k8s-version-213230 192.168.67.0/24 created
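The two "subnet is taken" warnings above are the subnet fallback working as designed: network.go reserves a candidate /24, tries `docker network create`, and on rejection skips every reserved block and steps 192.168.49.0 -> 192.168.58.0 -> 192.168.67.0 until the daemon accepts one. A sketch of that candidate walk, with minikube's labels and masquerade options omitted and the step size inferred from the addresses the log prints:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // createNetwork mirrors the docker network create invocation from the log
    // (minus the minikube labels and --ip-masq/--icc options) for a /24 and
    // its .1 gateway.
    func createNetwork(name, subnet, gateway string) error {
    	return exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet, "--gateway="+gateway,
    		"-o", "com.docker.network.driver.mtu=1500",
    		name).Run()
    }

    func main() {
    	// Same candidate ladder the log walks: 49.0 -> 58.0 -> 67.0 -> 76.0.
    	for third := 49; third <= 76; third += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		gateway := fmt.Sprintf("192.168.%d.1", third)
    		if err := createNetwork("old-k8s-version-213230", subnet, gateway); err != nil {
    			fmt.Printf("failed to create %s, will retry: subnet is taken\n", subnet)
    			continue
    		}
    		fmt.Printf("docker network old-k8s-version-213230 %s created\n", subnet)
    		return
    	}
    	fmt.Println("no free private subnet found")
    }

The second recreate attempt later in this same log lands on 192.168.76.0/24, the next rung of the same ladder, because the earlier reservations are still unexpired.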
	I1025 21:33:45.921955   19543 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-213230" container
	I1025 21:33:45.922048   19543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:33:45.985433   19543 cli_runner.go:164] Run: docker volume create old-k8s-version-213230 --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:33:46.045945   19543 oci.go:103] Successfully created a docker volume old-k8s-version-213230
	I1025 21:33:46.046067   19543 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:33:46.184821   19543 cli_runner.go:211] docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:33:46.184872   19543 client.go:171] LocalClient.Create took 665.503648ms
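That exit-125 `docker run` is the preload sidecar volume probe: it mounts the new volume at /var and runs `/usr/bin/test -d /var/lib` inside the kicbase image, so success means the volume is mountable and populated. Exit status 125 is the docker CLI's own "could not start the container" code, distinct from test's exit 1, which already hints that the daemon rather than the volume is at fault. A sketch that separates the two failure modes (the wrapper and its messages are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // probeVolume mounts vol at /var inside image and asks /usr/bin/test
    // whether /var/lib exists, as the preload sidecar in the log does.
    func probeVolume(vol, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/test",
    		"-v", vol+":/var", image, "-d", "/var/lib")
    	err := cmd.Run()
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		switch exitErr.ExitCode() {
    		case 1:
    			// test itself ran and said no: the volume is empty or unpopulated.
    			return fmt.Errorf("volume %s mounted but /var/lib is missing", vol)
    		case 125:
    			// docker never started the container: a daemon-side failure.
    			return fmt.Errorf("docker could not start the probe container")
    		}
    	}
    	return err
    }

    func main() {
    	err := probeVolume("old-k8s-version-213230",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094")
    	fmt.Println("probe:", err)
    }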
	I1025 21:33:48.187077   19543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:48.187192   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:48.304432   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:48.304555   19543 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:48.454269   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:48.518096   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:48.518178   19543 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:48.820801   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:48.886977   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:48.887071   19543 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:49.460432   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:49.523072   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:49.523166   19543 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:49.523180   19543 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:49.523228   19543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:49.523267   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:49.584796   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:49.584885   19543 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:49.765804   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:49.826972   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:49.827056   19543 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:50.157713   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:50.219664   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:50.219763   19543 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:50.680257   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:50.742405   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:50.742502   19543 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:50.742520   19543 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
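Every df retry above dies one step before ssh: ssh_runner first needs the host port published for the guest's 22/tcp, and that lookup is the Go template `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort` applied to `docker container inspect`. With no container, inspect exits 1 every time, so the retry budget burns down without a single ssh connection being attempted. The lookup in isolation (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort resolves the host port published for the container's 22/tcp
    // using the same inspect template the log retries against.
    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", format, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("old-k8s-version-213230")
    	if err != nil {
    		fmt.Println(err) // with no such container, this is the log's failure mode
    		return
    	}
    	fmt.Println("ssh host port:", port)
    }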
	I1025 21:33:50.742528   19543 start.go:128] duration metric: createHost completed in 5.266912142s
	I1025 21:33:50.742610   19543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:50.742653   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:50.802841   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:50.802953   19543 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:51.001043   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:51.063311   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:51.063404   19543 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:51.361192   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:51.424381   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:51.424469   19543 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:52.090073   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:52.152469   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:52.152571   19543 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:52.152611   19543 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:52.152666   19543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:52.152713   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:52.213179   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:52.213261   19543 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:52.391297   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:52.453636   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:52.453731   19543 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:52.777121   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:52.840214   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:33:52.840302   19543 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:53.443496   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:33:53.504774   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:33:53.504875   19543 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:33:53.504897   19543 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:53.504905   19543 fix.go:57] fixHost completed within 26.028020036s
	I1025 21:33:53.504912   19543 start.go:83] releasing machines lock for "old-k8s-version-213230", held for 26.028042871s
	W1025 21:33:53.504925   19543 start.go:603] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W1025 21:33:53.505079   19543 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:33:53.505088   19543 start.go:618] Will try again in 5 seconds ...
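The stderr buried in that StartHost failure is the root cause of this whole sequence: `dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused` means Docker Desktop's containerd backend is unreachable, so every inspect, network, volume, and run call in this log was doomed regardless of how the retries were spaced. A cheap preflight that distinguishes "daemon down" from "machine missing", purely illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // daemonUp asks the docker CLI to talk to the server; `docker version`
    // fails fast when the backend socket refuses connections, which is the
    // condition this whole log is retrying against.
    func daemonUp() bool {
    	return exec.Command("docker", "version",
    		"--format", "{{.Server.Version}}").Run() == nil
    }

    func main() {
    	if !daemonUp() {
    		fmt.Println("docker daemon unreachable; container state checks cannot succeed")
    		return
    	}
    	fmt.Println("docker daemon reachable")
    }

The 5-second pause below then replays the same fixHost/demolish/recreate cycle against the same dead backend.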
	I1025 21:33:58.507435   19543 start.go:364] acquiring machines lock for old-k8s-version-213230: {Name:mkf15d742925eff5dfa273d5f3f97b7bc6f95cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:58.507580   19543 start.go:368] acquired machines lock for "old-k8s-version-213230" in 109.419µs
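The machines lock above (Delay:500ms Timeout:10m0s) is what serializes concurrent operations on the same profile name: poll every 500ms, fail after ten minutes; here it is uncontended and acquired in microseconds. A sketch of that poll-with-timeout shape, assuming an in-process map rather than minikube's actual lock implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    	"time"
    )

    var machines sync.Map // name -> struct{}{} while held

    // acquire polls every delay until the named lock is free or timeout
    // elapses, matching the Delay/Timeout spec printed in the log.
    func acquire(name string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, loaded := machines.LoadOrStore(name, struct{}{}); !loaded {
    			return nil // lock acquired
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring machines lock for " + name)
    		}
    		time.Sleep(delay)
    	}
    }

    func release(name string) { machines.Delete(name) }

    func main() {
    	start := time.Now()
    	if err := acquire("old-k8s-version-213230", 500*time.Millisecond, 10*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release("old-k8s-version-213230")
    	fmt.Printf("acquired machines lock in %v\n", time.Since(start))
    }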
	I1025 21:33:58.507622   19543 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:33:58.507629   19543 fix.go:55] fixHost starting: 
	I1025 21:33:58.507985   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:58.571766   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:58.571818   19543 fix.go:103] recreateIfNeeded on old-k8s-version-213230: state= err=unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:58.571834   19543 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:33:58.616386   19543 out.go:177] * docker "old-k8s-version-213230" container is missing, will recreate.
	I1025 21:33:58.638208   19543 delete.go:124] DEMOLISHING old-k8s-version-213230 ...
	I1025 21:33:58.638439   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:58.701889   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:58.701934   19543 stop.go:75] unable to get state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:58.701961   19543 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:58.702327   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:58.763105   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:58.763147   19543 delete.go:82] Unable to get host status for old-k8s-version-213230, assuming it has already been deleted: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:58.763227   19543 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-213230
	W1025 21:33:58.823503   19543 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-213230 returned with exit code 1
	I1025 21:33:58.823533   19543 kic.go:356] could not find the container old-k8s-version-213230 to remove it. will try anyways
	I1025 21:33:58.823605   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:33:58.883116   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:58.883154   19543 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:58.883218   19543 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0"
	W1025 21:33:58.944324   19543 cli_runner.go:211] docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:33:58.944347   19543 oci.go:646] error shutdown old-k8s-version-213230: docker exec --privileged -t old-k8s-version-213230 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:33:59.946476   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:00.009648   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:00.009698   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:00.009709   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:00.009731   19543 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:00.408634   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:00.475092   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:00.475136   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:00.475150   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:00.475174   19543 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:01.075349   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:01.141411   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:01.141465   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:01.141482   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:01.141514   19543 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:02.553042   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:02.617035   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:02.617085   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:02.617100   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:02.617119   19543 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:03.809803   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:03.872711   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:03.872756   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:03.872774   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:03.872798   19543 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:07.330275   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:07.395918   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:07.395969   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:07.395982   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:07.396005   19543 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:11.940419   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:12.004824   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:12.004873   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:12.004887   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:12.004912   19543 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:17.838275   19543 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:17.903272   19543 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:17.903319   19543 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:17.903331   19543 oci.go:660] temporary error: container old-k8s-version-213230 status is  but expect it to be exited
	I1025 21:34:17.903359   19543 oci.go:88] couldn't shut down old-k8s-version-213230 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	 
	I1025 21:34:17.903417   19543 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-213230
	I1025 21:34:17.966419   19543 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-213230
	W1025 21:34:18.027856   19543 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-213230 returned with exit code 1
	I1025 21:34:18.027949   19543 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:34:18.089246   19543 cli_runner.go:164] Run: docker network rm old-k8s-version-213230
	W1025 21:34:18.195961   19543 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:34:18.195978   19543 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:34:19.197915   19543 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:34:19.219789   19543 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:34:19.219896   19543 start.go:159] libmachine.API.Create for "old-k8s-version-213230" (driver="docker")
	I1025 21:34:19.219925   19543 client.go:168] LocalClient.Create starting
	I1025 21:34:19.220003   19543 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:34:19.220043   19543 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:19.220056   19543 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:19.220099   19543 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:34:19.220128   19543 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:19.220139   19543 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:19.241350   19543 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:34:19.305760   19543 cli_runner.go:211] docker network inspect old-k8s-version-213230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:34:19.305831   19543 network_create.go:272] running [docker network inspect old-k8s-version-213230] to gather additional debugging logs...
	I1025 21:34:19.305850   19543 cli_runner.go:164] Run: docker network inspect old-k8s-version-213230
	W1025 21:34:19.366951   19543 cli_runner.go:211] docker network inspect old-k8s-version-213230 returned with exit code 1
	I1025 21:34:19.366971   19543 network_create.go:275] error running [docker network inspect old-k8s-version-213230]: docker network inspect old-k8s-version-213230: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-213230
	I1025 21:34:19.366994   19543 network_create.go:277] output of [docker network inspect old-k8s-version-213230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-213230
	
	** /stderr **
	I1025 21:34:19.367058   19543 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:34:19.427535   19543 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8] amended:true}} dirty:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098 192.168.67.0:0xc000c00030] misses:1}
	I1025 21:34:19.427563   19543 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:19.427757   19543 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8] amended:true}} dirty:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098 192.168.67.0:0xc000c00030] misses:2}
	I1025 21:34:19.427765   19543 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:19.427997   19543 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098 192.168.67.0:0xc000c00030] amended:false}} dirty:map[] misses:0}
	I1025 21:34:19.428009   19543 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:19.428206   19543 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098 192.168.67.0:0xc000c00030] amended:true}} dirty:map[192.168.49.0:0xc000a329b8 192.168.58.0:0xc000a32098 192.168.67.0:0xc000c00030 192.168.76.0:0xc0001502d0] misses:0}
	I1025 21:34:19.428222   19543 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:19.428232   19543 network_create.go:115] attempt to create docker network old-k8s-version-213230 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:34:19.428295   19543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-213230 old-k8s-version-213230
	I1025 21:34:19.519357   19543 network_create.go:99] docker network old-k8s-version-213230 192.168.76.0/24 created
	I1025 21:34:19.519424   19543 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-213230" container
	I1025 21:34:19.519522   19543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:34:19.580862   19543 cli_runner.go:164] Run: docker volume create old-k8s-version-213230 --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:34:19.641803   19543 oci.go:103] Successfully created a docker volume old-k8s-version-213230
	I1025 21:34:19.641897   19543 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:34:19.780545   19543 cli_runner.go:211] docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:34:19.780585   19543 client.go:171] LocalClient.Create took 560.653913ms
	I1025 21:34:21.782691   19543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:34:21.782812   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:21.846845   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:21.846933   19543 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:22.012638   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:22.078536   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:22.078662   19543 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:22.496186   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:22.561220   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:22.561306   19543 retry.go:31] will retry after 829.823411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:23.391451   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:23.454125   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:34:23.454222   19543 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:34:23.454241   19543 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:23.454286   19543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:34:23.454373   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:23.514640   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:23.514717   19543 retry.go:31] will retry after 273.70215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:23.788540   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:23.849110   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:23.849195   19543 retry.go:31] will retry after 209.670244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:24.060466   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:24.124991   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:24.125084   19543 retry.go:31] will retry after 670.513831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:24.797940   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:24.860883   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:34:24.860977   19543 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:34:24.861003   19543 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:24.861014   19543 start.go:128] duration metric: createHost completed in 5.663067092s
	I1025 21:34:24.861075   19543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:34:24.861111   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:24.920968   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:24.921060   19543 retry.go:31] will retry after 168.316559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:25.089539   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:25.150078   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:25.150165   19543 retry.go:31] will retry after 390.412446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:25.541575   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:25.607564   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:25.607667   19543 retry.go:31] will retry after 587.33751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:26.197369   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:26.259391   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:34:26.259515   19543 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:34:26.259533   19543 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:26.259582   19543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:34:26.259623   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:26.319560   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:26.319652   19543 retry.go:31] will retry after 230.78805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:26.550797   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:26.614494   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:26.614595   19543 retry.go:31] will retry after 386.469643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:27.001286   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:27.065184   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:27.065311   19543 retry.go:31] will retry after 423.866531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:27.489334   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:27.552563   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	I1025 21:34:27.552661   19543 retry.go:31] will retry after 659.880839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:28.214916   19543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230
	W1025 21:34:28.280890   19543 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230 returned with exit code 1
	W1025 21:34:28.280979   19543 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:34:28.280997   19543 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-213230": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-213230: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	I1025 21:34:28.281005   19543 fix.go:57] fixHost completed within 29.773280796s
	I1025 21:34:28.281012   19543 start.go:83] releasing machines lock for "old-k8s-version-213230", held for 29.773325321s
	W1025 21:34:28.281185   19543 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-213230" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:34:28.324237   19543 out.go:177] 
	W1025 21:34:28.345528   19543 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-213230 container: docker run --rm --name old-k8s-version-213230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-213230 --entrypoint /usr/bin/test -v old-k8s-version-213230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:34:28.345565   19543 out.go:239] * 
	W1025 21:34:28.346983   19543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:34:28.409378   19543 out.go:177] 

                                                
                                                
** /stderr **
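Note: the failure loop above is minikube resolving the node's SSH endpoint. It asks dockerd which host port is published for the container's 22/tcp using a go-template, retrying with a short backoff, and every attempt fails with "No such container" because the node container was never recreated. A minimal, hypothetical Go sketch of that lookup (shelling out to the docker CLI; retry policy illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // sshHostPort mirrors the lookup in the log above: ask dockerd which host
    // port is bound to the container's 22/tcp, retrying briefly because the
    // container may still be coming up during a recreate.
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	var lastErr error
    	for attempt, delay := 0, 200*time.Millisecond; attempt < 5; attempt, delay = attempt+1, delay*2 {
    		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		lastErr = err // here: exit status 1, "Error: No such container: ..."
    		time.Sleep(delay)
    	}
    	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
    }

    func main() {
    	port, err := sshHostPort("old-k8s-version-213230")
    	fmt.Println(port, err)
    }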
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-213230 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 80
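Interleaved with those port lookups, the log also shows minikube probing the guest's free disk space over SSH with sh -c "df -BG /var | awk 'NR==2{print $4}'"; with no container to SSH into, every probe fails the same way. A tiny sketch of parsing that probe's output once it does come back (the SSH transport is elided; the field format is the one df -BG prints):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // freeVarGiB parses the single field produced by
    //   df -BG /var | awk 'NR==2{print $4}'
    // e.g. "17G" -> 17. In minikube this string arrives over SSH; here we
    // just parse a sample value.
    func freeVarGiB(dfField string) (int, error) {
    	return strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(dfField), "G"))
    }

    func main() {
    	gib, err := freeVarGiB("17G\n")
    	fmt.Println(gib, err)
    }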
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "0e9988291015c86e59eebb1e91026fbaaa45e5a2ab3f5a89379387a294619e4a",
	        "Created": "2022-10-26T04:34:19.4942893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
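The inspect output above is for the profile's Docker network, not the container: the bridge network old-k8s-version-213230 (subnet 192.168.76.0/24) survived, and its empty "Containers": {} confirms nothing is attached because the node container was never recreated. A small sketch of checking that condition programmatically (assumes only the docker CLI on PATH):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // network captures just the fields we need from `docker network inspect`.
    type network struct {
    	Name       string                     `json:"Name"`
    	Containers map[string]json.RawMessage `json:"Containers"`
    }

    func main() {
    	out, err := exec.Command("docker", "network", "inspect", "old-k8s-version-213230").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	var nets []network
    	if err := json.Unmarshal(out, &nets); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	for _, n := range nets {
    		// An orphaned profile network shows zero attached containers, as above.
    		fmt.Printf("network %q: %d attached container(s)\n", n.Name, len(n.Containers))
    	}
    }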
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (112.313622ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:28.625934   19999 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (62.12s)
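The root cause surfaces in the GUEST_PROVISION error above: minikube's volume preflight, a throwaway docker run --rm --entrypoint /usr/bin/test -v <volume>:/var <image> -d /var/lib, exited 125 because Docker Desktop's dockerd could not reach its containerd socket (connection refused), i.e. the daemon itself was unhealthy rather than the test logic. A hedged Go sketch of that preflight pattern (image and volume names are placeholders, not minikube's exact invocation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // volumeHasVarLib runs /usr/bin/test -d /var/lib inside a disposable
    // container with the named volume mounted at /var, so the container's exit
    // code reports whether the volume already holds a /var/lib tree. Exit 125,
    // as in the log above, means docker itself failed before test could run.
    func volumeHasVarLib(volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/test",
    		"-v", volume+":/var", image, "-d", "/var/lib")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("preflight failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Placeholder image; any image that ships /usr/bin/test works.
    	fmt.Println(volumeHasVarLib("old-k8s-version-213230", "ubuntu:22.04"))
    }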

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (114.125578ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:27.270492   19566 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-213232 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "e1aea579fdb9c1acd37d4d09e9e947cd0dc2e0fa3494023796e6950031490ac8",
	        "Created": "2022-10-26T04:33:02.62602045Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (116.258797ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:33:27.854623   19589 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.70s)
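This subtest fails on semantics rather than infrastructure: after the stop, the harness expects host status "Stopped", but because the container is gone entirely, the State.Status probe errors with "No such container" and minikube reports "Nonexistent" with exit code 7. A rough sketch of that mapping (simplified; not the actual status.go logic):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hostState reduces `docker container inspect --format={{.State.Status}}`
    // to the outcomes visible in this report.
    func hostState(container string) (string, int) {
    	out, err := exec.Command("docker", "container", "inspect",
    		container, "--format={{.State.Status}}").CombinedOutput()
    	if err != nil {
    		if strings.Contains(string(out), "No such container") {
    			return "Nonexistent", 7 // what the harness saw instead of "Stopped"
    		}
    		return "Error", 1
    	}
    	return strings.TrimSpace(string(out)), 0 // e.g. "running", "exited"
    }

    func main() {
    	state, code := hostState("no-preload-213232")
    	fmt.Println(state)
    	os.Exit(code)
    }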

TestStartStop/group/no-preload/serial/SecondStart (61.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-213232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-213232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (1m1.217586859s)

                                                
                                                
-- stdout --
	* [no-preload-213232] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-213232 in cluster no-preload-213232
	* Pulling base image ...
	* docker "no-preload-213232" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-213232" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:33:27.907119   19597 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:33:27.907245   19597 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:27.907250   19597 out.go:309] Setting ErrFile to fd 2...
	I1025 21:33:27.907256   19597 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:33:27.907359   19597 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:33:27.907818   19597 out.go:303] Setting JSON to false
	I1025 21:33:27.924220   19597 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5576,"bootTime":1666753231,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:33:27.924333   19597 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:33:27.946224   19597 out.go:177] * [no-preload-213232] minikube v1.27.1 on Darwin 12.6
	I1025 21:33:27.989634   19597 notify.go:220] Checking for updates...
	I1025 21:33:28.011192   19597 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:33:28.053417   19597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:33:28.095797   19597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:33:28.117761   19597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:33:28.139594   19597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:33:28.162218   19597 config.go:180] Loaded profile config "no-preload-213232": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:33:28.162816   19597 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:33:28.231230   19597 docker.go:137] docker version: linux-20.10.17
	I1025 21:33:28.231412   19597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:33:28.361075   19597 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:33:28.303030719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:33:28.404866   19597 out.go:177] * Using the docker driver based on existing profile
	I1025 21:33:28.426885   19597 start.go:282] selected driver: docker
	I1025 21:33:28.426908   19597 start.go:808] validating driver "docker" against &{Name:no-preload-213232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-213232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:33:28.427053   19597 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:33:28.430374   19597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:33:28.556993   19597 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:33:28.501058144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:33:28.557140   19597 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:33:28.557161   19597 cni.go:95] Creating CNI manager for ""
	I1025 21:33:28.557171   19597 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:33:28.557181   19597 start_flags.go:317] config:
	{Name:no-preload-213232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-213232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:33:28.601921   19597 out.go:177] * Starting control plane node no-preload-213232 in cluster no-preload-213232
	I1025 21:33:28.623889   19597 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:33:28.645953   19597 out.go:177] * Pulling base image ...
	I1025 21:33:28.689959   19597 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:33:28.689972   19597 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:33:28.690163   19597 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/no-preload-213232/config.json ...
	I1025 21:33:28.691675   19597 cache.go:107] acquiring lock: {Name:mk9496eca59ca8d1cbd01dfb5f76b68b912ca8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692842   19597 cache.go:107] acquiring lock: {Name:mk407eb5c2bca35e6b95c015dbc18b7fb7a7319d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692851   19597 cache.go:107] acquiring lock: {Name:mk86cad4351afc40081233a18a264f52ef6cc915 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692579   19597 cache.go:107] acquiring lock: {Name:mke6a74e8037e86be3f77efd2f3ae0ed51bdab2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692598   19597 cache.go:107] acquiring lock: {Name:mk9254c163158bba7ec1e073185cdb240af77bd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692526   19597 cache.go:107] acquiring lock: {Name:mk76e961842b5a32a7ee23f80ba0702d856358cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.691731   19597 cache.go:107] acquiring lock: {Name:mke6e041073c846ecb833c53066e7029cc1b89cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692962   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
	I1025 21:33:28.692975   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I1025 21:33:28.692787   19597 cache.go:107] acquiring lock: {Name:mkbb578537582a40800c4eeced6a7027b4a94c0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.692995   19597 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 1.021347ms
	I1025 21:33:28.693014   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
	I1025 21:33:28.693013   19597 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I1025 21:33:28.693016   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I1025 21:33:28.693025   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
	I1025 21:33:28.693028   19597 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 1.283493ms
	I1025 21:33:28.693039   19597 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 1.3537ms
	I1025 21:33:28.693039   19597 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 416.106µs
	I1025 21:33:28.693046   19597 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
	I1025 21:33:28.693012   19597 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 1.759948ms
	I1025 21:33:28.693053   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I1025 21:33:28.693061   19597 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
	I1025 21:33:28.693069   19597 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 1.096328ms
	I1025 21:33:28.693077   19597 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I1025 21:33:28.693048   19597 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
	I1025 21:33:28.693055   19597 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I1025 21:33:28.693051   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 21:33:28.693088   19597 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.621147ms
	I1025 21:33:28.693095   19597 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 21:33:28.693110   19597 cache.go:115] /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
	I1025 21:33:28.693119   19597 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 1.652617ms
	I1025 21:33:28.693129   19597 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
	I1025 21:33:28.693137   19597 cache.go:87] Successfully saved all images to host disk.
	I1025 21:33:28.753415   19597 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:33:28.753437   19597 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:33:28.753446   19597 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:33:28.753491   19597 start.go:364] acquiring machines lock for no-preload-213232: {Name:mk0ecc979bb14f8cd1ca75a3ba2690326b8c6623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:28.753560   19597 start.go:368] acquired machines lock for "no-preload-213232" in 58.891µs
	I1025 21:33:28.753578   19597 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:33:28.753586   19597 fix.go:55] fixHost starting: 
	I1025 21:33:28.753801   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:28.813808   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:28.813864   19597 fix.go:103] recreateIfNeeded on no-preload-213232: state= err=unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:28.813886   19597 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:33:28.835941   19597 out.go:177] * docker "no-preload-213232" container is missing, will recreate.
	I1025 21:33:28.878492   19597 delete.go:124] DEMOLISHING no-preload-213232 ...
	I1025 21:33:28.878688   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:28.940437   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:28.940481   19597 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:28.940506   19597 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:28.940866   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:29.001645   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:29.001710   19597 delete.go:82] Unable to get host status for no-preload-213232, assuming it has already been deleted: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:29.001779   19597 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-213232
	W1025 21:33:29.062131   19597 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-213232 returned with exit code 1
	I1025 21:33:29.062160   19597 kic.go:356] could not find the container no-preload-213232 to remove it. will try anyways
	I1025 21:33:29.062228   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:29.123973   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:29.124015   19597 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:29.124087   19597 cli_runner.go:164] Run: docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0"
	W1025 21:33:29.184362   19597 cli_runner.go:211] docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:33:29.184387   19597 oci.go:646] error shutdown no-preload-213232: docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:30.186793   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:30.251087   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:30.251148   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:30.251159   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:30.251197   19597 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:30.804809   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:30.870623   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:30.870691   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:30.870705   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:30.870723   19597 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:31.953506   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:32.020080   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:32.020140   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:32.020152   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:32.020173   19597 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:33.331527   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:33.395726   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:33.395767   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:33.395785   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:33.395805   19597 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:34.979754   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:35.044178   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:35.044261   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:35.044274   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:35.044302   19597 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:37.387174   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:37.450767   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:37.450807   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:37.450823   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:37.450847   19597 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:41.957717   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:42.022163   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:42.022210   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:42.022227   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:42.022246   19597 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:45.246152   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:45.310810   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:45.310855   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:45.310868   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:33:45.310915   19597 oci.go:88] couldn't shut down no-preload-213232 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	 
	I1025 21:33:45.310986   19597 cli_runner.go:164] Run: docker rm -f -v no-preload-213232
	I1025 21:33:45.373915   19597 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-213232
	W1025 21:33:45.434595   19597 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-213232 returned with exit code 1
	I1025 21:33:45.434712   19597 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:45.527285   19597 cli_runner.go:164] Run: docker network rm no-preload-213232
	W1025 21:33:45.639568   19597 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:33:45.639683   19597 fix.go:115] Sleeping 1 second for extra luck!
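For context on the teardown above: this is minikube's "demolish" path. `sudo init 0` inside the container fails because the container is already gone, the state poll never reports `exited`, and after several randomized backoff delays the code gives up with "couldn't shut down ... (might be okay)" and falls through to `docker rm -f -v` and `docker network rm`. A minimal sketch of that verify-shutdown loop, using hypothetical names (this is not the actual oci.go code):

	package diag

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus shells out the same way cli_runner does in the log.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitExited polls for state "exited", sleeping between attempts; when
	// the container does not exist, inspect exits 1 and the loop keeps
	// retrying until the delays run out, mirroring the "might be okay"
	// give-up seen above.
	func waitExited(name string, delays []time.Duration) error {
		for _, d := range delays {
			if status, err := containerStatus(name); err == nil && status == "exited" {
				return nil
			}
			time.Sleep(d)
		}
		return fmt.Errorf("couldn't verify container %q is exited (might be okay)", name)
	}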
	I1025 21:33:46.641833   19597 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:33:46.664195   19597 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:33:46.664381   19597 start.go:159] libmachine.API.Create for "no-preload-213232" (driver="docker")
	I1025 21:33:46.664447   19597 client.go:168] LocalClient.Create starting
	I1025 21:33:46.664603   19597 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:33:46.664671   19597 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:46.664703   19597 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:46.664804   19597 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:33:46.664849   19597 main.go:134] libmachine: Decoding PEM data...
	I1025 21:33:46.664865   19597 main.go:134] libmachine: Parsing certificate...
	I1025 21:33:46.665511   19597 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:33:46.729613   19597 cli_runner.go:211] docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:33:46.729710   19597 network_create.go:272] running [docker network inspect no-preload-213232] to gather additional debugging logs...
	I1025 21:33:46.729728   19597 cli_runner.go:164] Run: docker network inspect no-preload-213232
	W1025 21:33:46.790583   19597 cli_runner.go:211] docker network inspect no-preload-213232 returned with exit code 1
	I1025 21:33:46.790604   19597 network_create.go:275] error running [docker network inspect no-preload-213232]: docker network inspect no-preload-213232: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-213232
	I1025 21:33:46.790616   19597 network_create.go:277] output of [docker network inspect no-preload-213232]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-213232
	
	** /stderr **
	I1025 21:33:46.790685   19597 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:33:46.851216   19597 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000013378] misses:0}
	I1025 21:33:46.851257   19597 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:46.851277   19597 network_create.go:115] attempt to create docker network no-preload-213232 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:33:46.851351   19597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	W1025 21:33:46.912822   19597 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232 returned with exit code 1
	W1025 21:33:46.912881   19597 network_create.go:107] failed to create docker network no-preload-213232 192.168.49.0/24, will retry: subnet is taken
	I1025 21:33:46.913161   19597 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:false}} dirty:map[] misses:0}
	I1025 21:33:46.913179   19597 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:46.913379   19597 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0] misses:0}
	I1025 21:33:46.913402   19597 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:46.913413   19597 network_create.go:115] attempt to create docker network no-preload-213232 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:33:46.913478   19597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	W1025 21:33:46.974650   19597 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232 returned with exit code 1
	W1025 21:33:46.974706   19597 network_create.go:107] failed to create docker network no-preload-213232 192.168.58.0/24, will retry: subnet is taken
	I1025 21:33:46.974979   19597 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0] misses:1}
	I1025 21:33:46.975024   19597 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:46.975243   19597 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0] misses:1}
	I1025 21:33:46.975257   19597 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:46.975264   19597 network_create.go:115] attempt to create docker network no-preload-213232 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:33:46.975322   19597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	W1025 21:33:47.036826   19597 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232 returned with exit code 1
	W1025 21:33:47.036886   19597 network_create.go:107] failed to create docker network no-preload-213232 192.168.67.0/24, will retry: subnet is taken
	I1025 21:33:47.037159   19597 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0] misses:2}
	I1025 21:33:47.037176   19597 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:47.037380   19597 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8] misses:2}
	I1025 21:33:47.037391   19597 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:33:47.037400   19597 network_create.go:115] attempt to create docker network no-preload-213232 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:33:47.037454   19597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	I1025 21:33:47.127359   19597 network_create.go:99] docker network no-preload-213232 192.168.76.0/24 created
	I1025 21:33:47.127400   19597 kic.go:106] calculated static IP "192.168.76.2" for the "no-preload-213232" container
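The network_create/network.go lines above show how a free /24 is found: starting at 192.168.49.0/24, each candidate is reserved for one minute and handed to `docker network create`; when Docker answers "subnet is taken", the third octet advances by 9, giving the 49 → 58 → 67 → 76 walk before 192.168.76.0/24 finally sticks. The node's static IP is then the first client address in that range (ClientMin, i.e. gateway .1 plus one), hence 192.168.76.2. A sketch of the candidate sequence, with start and step read off the log (the function is illustrative, not minikube's):

	package diag

	import "fmt"

	// candidateSubnets reproduces the /24 sequence visible in the log:
	// 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ...
	func candidateSubnets(n int) []string {
		var subnets []string
		for octet := 49; len(subnets) < n && octet <= 254; octet += 9 {
			subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", octet))
		}
		return subnets
	}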
	I1025 21:33:47.127495   19597 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:33:47.190079   19597 cli_runner.go:164] Run: docker volume create no-preload-213232 --label name.minikube.sigs.k8s.io=no-preload-213232 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:33:47.252074   19597 oci.go:103] Successfully created a docker volume no-preload-213232
	I1025 21:33:47.252183   19597 cli_runner.go:164] Run: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:33:47.379503   19597 cli_runner.go:211] docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:33:47.379558   19597 client.go:171] LocalClient.Create took 715.096844ms
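The failing command above is the preload "sidecar": a throwaway `docker run --rm` of the kicbase image with its entrypoint replaced by `/usr/bin/test -d /var/lib`, run only to prove that the freshly created volume mounts and contains a usable /var. Exit code 0 or 1 would have come from `test` itself; 125 is the docker CLI failing before any container ran, so the volume was never actually checked and this create attempt is doomed from here. A hedged reproduction of that probe (the helper name and error wrapping are mine):

	package diag

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// probeVolume mounts vol at /var in image and runs `test -d /var/lib`,
	// exactly as the sidecar in the log does.
	func probeVolume(vol, image string) error {
		err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/test",
			"-v", vol+":/var", image, "-d", "/var/lib").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 125 {
			// 125 means docker itself failed; the volume went untested.
			return fmt.Errorf("docker could not start the probe container: %w", err)
		}
		return err
	}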
	I1025 21:33:49.380947   19597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:49.381107   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:49.443848   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:49.443945   19597 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:49.595018   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:49.656448   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:49.656528   19597 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:49.959263   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:50.021148   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:50.021227   19597 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:50.594471   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:50.659600   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:50.659693   19597 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:50.659755   19597 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
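The df preflight above never reaches the node: before ssh_runner can run `df -h /var`, it must learn which host port Docker mapped to the container's port 22, and that lookup is the inspect template failing in every retry that follows. A small helper showing the same lookup (the function name is hypothetical; the Go template string is taken verbatim from the log):

	package diag

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks Docker which host port is published for sshd
	// (22/tcp) in the named container; with no container, inspect exits 1
	// and the caller retries, as seen throughout this log.
	func sshHostPort(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			name).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}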
	I1025 21:33:50.659805   19597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:50.659861   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:50.722192   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:50.722269   19597 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:50.903157   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:50.967692   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:50.967770   19597 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:51.300292   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:51.366947   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:51.367059   19597 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:51.827569   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:51.893021   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:51.893111   19597 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:51.893126   19597 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:51.893144   19597 start.go:128] duration metric: createHost completed in 5.25119325s
	I1025 21:33:51.893202   19597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:33:51.893240   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:51.953694   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:51.953791   19597 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:52.149838   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:52.213179   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:52.213271   19597 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:52.512590   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:52.574328   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:52.574441   19597 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:53.238268   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:53.301076   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:53.301161   19597 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:53.301178   19597 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:53.301258   19597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:33:53.301294   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:53.362436   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:53.362511   19597 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:53.538549   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:53.600044   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:53.600144   19597 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:53.924170   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:53.988997   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:33:53.989076   19597 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:54.593749   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:33:54.657503   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:33:54.657585   19597 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:33:54.657624   19597 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:54.657655   19597 fix.go:57] fixHost completed within 25.90398359s
	I1025 21:33:54.657662   19597 start.go:83] releasing machines lock for "no-preload-213232", held for 25.904012443s
	W1025 21:33:54.657677   19597 start.go:603] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W1025 21:33:54.657830   19597 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:33:54.657839   19597 start.go:618] Will try again in 5 seconds ...
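The stderr block above is the root cause of everything in this sequence: Docker Desktop's containerd endpoint at /var/run/desktop-containerd/containerd.sock is refusing connections, so the daemon cannot start any container, and every inspect/run/rm in the log fails the same way until the 5-second retry begins. A quick liveness probe for that socket, offered as a diagnostic sketch rather than anything minikube itself runs:

	package diag

	import (
		"fmt"
		"net"
		"time"
	)

	// probeContainerdSocket dials the socket named in the log's error; a
	// "connection refused" here explains every docker failure above.
	func probeContainerdSocket() error {
		conn, err := net.DialTimeout("unix",
			"/var/run/desktop-containerd/containerd.sock", 2*time.Second)
		if err != nil {
			return fmt.Errorf("containerd unavailable: %w", err)
		}
		return conn.Close()
	}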
	I1025 21:33:59.660166   19597 start.go:364] acquiring machines lock for no-preload-213232: {Name:mk0ecc979bb14f8cd1ca75a3ba2690326b8c6623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:33:59.660316   19597 start.go:368] acquired machines lock for "no-preload-213232" in 116.229µs
	I1025 21:33:59.660351   19597 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:33:59.660359   19597 fix.go:55] fixHost starting: 
	I1025 21:33:59.660729   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:59.723174   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:59.723225   19597 fix.go:103] recreateIfNeeded on no-preload-213232: state= err=unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:59.723239   19597 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:33:59.745060   19597 out.go:177] * docker "no-preload-213232" container is missing, will recreate.
	I1025 21:33:59.789101   19597 delete.go:124] DEMOLISHING no-preload-213232 ...
	I1025 21:33:59.789304   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:59.850797   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:33:59.850833   19597 stop.go:75] unable to get state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:59.850846   19597 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:59.851225   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:33:59.911657   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:33:59.911700   19597 delete.go:82] Unable to get host status for no-preload-213232, assuming it has already been deleted: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:33:59.911763   19597 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-213232
	W1025 21:33:59.972879   19597 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-213232 returned with exit code 1
	I1025 21:33:59.972907   19597 kic.go:356] could not find the container no-preload-213232 to remove it. will try anyways
	I1025 21:33:59.972985   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:00.033980   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	W1025 21:34:00.034023   19597 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:00.034085   19597 cli_runner.go:164] Run: docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0"
	W1025 21:34:00.094881   19597 cli_runner.go:211] docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:34:00.094905   19597 oci.go:646] error shutdown no-preload-213232: docker exec --privileged -t no-preload-213232 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:01.097006   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:01.159307   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:01.159347   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:01.159360   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:01.159377   19597 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:01.558317   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:01.622113   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:01.622156   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:01.622169   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:01.622191   19597 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:02.220445   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:02.285538   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:02.285598   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:02.285612   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:02.285633   19597 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:03.697184   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:03.762267   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:03.762309   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:03.762320   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:03.762344   19597 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:04.957087   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:05.023061   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:05.023102   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:05.023112   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:05.023132   19597 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:08.481563   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:08.543874   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:08.543919   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:08.543943   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:08.543965   19597 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:13.090090   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:13.154820   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:13.154872   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:13.154898   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:13.154925   19597 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:18.986971   19597 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:19.050548   19597 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:19.050589   19597 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:19.050601   19597 oci.go:660] temporary error: container no-preload-213232 status is  but expect it to be exited
	I1025 21:34:19.050626   19597 oci.go:88] couldn't shut down no-preload-213232 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	 
	I1025 21:34:19.050698   19597 cli_runner.go:164] Run: docker rm -f -v no-preload-213232
	I1025 21:34:19.115431   19597 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-213232
	W1025 21:34:19.175153   19597 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-213232 returned with exit code 1
	I1025 21:34:19.175248   19597 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:34:19.250133   19597 cli_runner.go:164] Run: docker network rm no-preload-213232
	W1025 21:34:19.363623   19597 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:34:19.363642   19597 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:34:20.365765   19597 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:34:20.390669   19597 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:34:20.390760   19597 start.go:159] libmachine.API.Create for "no-preload-213232" (driver="docker")
	I1025 21:34:20.390777   19597 client.go:168] LocalClient.Create starting
	I1025 21:34:20.390902   19597 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:34:20.390946   19597 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:20.390959   19597 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:20.391001   19597 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:34:20.391025   19597 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:20.391036   19597 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:20.411141   19597 cli_runner.go:164] Run: docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:34:20.473839   19597 cli_runner.go:211] docker network inspect no-preload-213232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:34:20.473912   19597 network_create.go:272] running [docker network inspect no-preload-213232] to gather additional debugging logs...
	I1025 21:34:20.473926   19597 cli_runner.go:164] Run: docker network inspect no-preload-213232
	W1025 21:34:20.536329   19597 cli_runner.go:211] docker network inspect no-preload-213232 returned with exit code 1
	I1025 21:34:20.536351   19597 network_create.go:275] error running [docker network inspect no-preload-213232]: docker network inspect no-preload-213232: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-213232
	I1025 21:34:20.536372   19597 network_create.go:277] output of [docker network inspect no-preload-213232]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-213232
	
	** /stderr **
	I1025 21:34:20.536455   19597 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:34:20.596818   19597 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8] misses:2}
	I1025 21:34:20.596853   19597 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:20.597046   19597 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8] misses:3}
	I1025 21:34:20.597055   19597 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:20.597267   19597 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8] amended:false}} dirty:map[] misses:0}
	I1025 21:34:20.597276   19597 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:20.597478   19597 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8] amended:false}} dirty:map[] misses:0}
	I1025 21:34:20.597487   19597 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:20.597703   19597 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8] amended:true}} dirty:map[192.168.49.0:0xc000013378 192.168.58.0:0xc000b220d0 192.168.67.0:0xc0000133b0 192.168.76.0:0xc0000133e8 192.168.85.0:0xc000418370] misses:0}
	I1025 21:34:20.597716   19597 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:20.597723   19597 network_create.go:115] attempt to create docker network no-preload-213232 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:34:20.597792   19597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-213232 no-preload-213232
	I1025 21:34:20.689045   19597 network_create.go:99] docker network no-preload-213232 192.168.85.0/24 created
	I1025 21:34:20.689082   19597 kic.go:106] calculated static IP "192.168.85.2" for the "no-preload-213232" container
	I1025 21:34:20.689184   19597 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:34:20.751592   19597 cli_runner.go:164] Run: docker volume create no-preload-213232 --label name.minikube.sigs.k8s.io=no-preload-213232 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:34:20.813310   19597 oci.go:103] Successfully created a docker volume no-preload-213232
	I1025 21:34:20.813439   19597 cli_runner.go:164] Run: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:34:20.961588   19597 cli_runner.go:211] docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:34:20.961636   19597 client.go:171] LocalClient.Create took 570.851964ms
	I1025 21:34:22.963134   19597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:34:22.963253   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:23.025928   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:23.026014   19597 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:23.191772   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:23.309988   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:23.310072   19597 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:23.725792   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:23.790714   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:23.790804   19597 retry.go:31] will retry after 829.823411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:24.621406   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:24.686820   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:34:24.686920   19597 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:34:24.686937   19597 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:24.686989   19597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:34:24.687031   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:24.748042   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:24.748121   19597 retry.go:31] will retry after 273.70215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:25.024070   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:25.087368   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:25.087470   19597 retry.go:31] will retry after 209.670244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:25.299453   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:25.362974   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:25.363071   19597 retry.go:31] will retry after 670.513831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:26.035956   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:26.101722   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:34:26.101811   19597 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:34:26.101835   19597 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:26.101850   19597 start.go:128] duration metric: createHost completed in 5.736028758s
	I1025 21:34:26.101914   19597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:34:26.101953   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:26.163062   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:26.163138   19597 retry.go:31] will retry after 168.316559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:26.333793   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:26.396312   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:26.396392   19597 retry.go:31] will retry after 390.412446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:26.789231   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:26.850889   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:26.850991   19597 retry.go:31] will retry after 587.33751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:27.440712   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:27.505949   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:34:27.506040   19597 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:34:27.506058   19597 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:27.506118   19597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:34:27.506160   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:27.566737   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:27.566814   19597 retry.go:31] will retry after 230.78805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:27.798167   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:27.865198   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:27.865280   19597 retry.go:31] will retry after 386.469643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:28.253990   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:28.412086   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	I1025 21:34:28.412222   19597 retry.go:31] will retry after 423.866531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:28.836816   19597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232
	W1025 21:34:28.900386   19597 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232 returned with exit code 1
	W1025 21:34:28.900471   19597 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:34:28.900489   19597 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-213232": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-213232: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	I1025 21:34:28.900501   19597 fix.go:57] fixHost completed within 29.24004925s
	I1025 21:34:28.900508   19597 start.go:83] releasing machines lock for "no-preload-213232", held for 29.240085142s
	W1025 21:34:28.900746   19597 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-213232" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:34:28.944329   19597 out.go:177] 
	W1025 21:34:28.965288   19597 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-213232 container: docker run --rm --name no-preload-213232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-213232 --entrypoint /usr/bin/test -v no-preload-213232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:34:28.965306   19597 out.go:239] * 
	W1025 21:34:28.966010   19597 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:34:29.029230   19597 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-213232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3": exit status 80
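The probe being retried throughout the log above is a single docker inspect Go template that reads the host port published for 22/tcp. A minimal sketch of that probe-and-retry loop, assuming the container name and sub-second delays from this run (the helper below is ours, not minikube's actual retry code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks Docker for the host port published for 22/tcp on the
// named container, using the same Go template as the probes in the log.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		// A missing container surfaces as exit status 1 with
		// "Error: No such container: <name>" on stderr, exactly as above.
		return "", fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	delay := 200 * time.Millisecond // grows each attempt, like the retry.go lines
	for attempt := 1; attempt <= 5; attempt++ {
		port, err := sshHostPort("no-preload-213232")
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		fmt.Printf("attempt %d: will retry after %v: %v\n", attempt, delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("giving up: container never became inspectable")
}

Because the container was never created, every probe fails identically, which is why the log cycles through retries for both df -h and df -BG before createHost gives up.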
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "5cc934a1c7286dba887857b0f42792283e67c8ad2d39b77b5e6e0d6a21b97cfb",
	        "Created": "2022-10-26T04:34:20.66411019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (131.375773ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:29.299990   20024 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (61.45s)
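The root cause in this section is not the profile itself: Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections, so the volume-preparing docker run exited 125 before any container existed. A hedged preflight sketch (our assumption, not a step minikube performs) that verifies the daemon is reachable before attempting a run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// `docker version --format {{.Server.Version}}` needs a round trip to
	// the daemon, so it fails fast when the Desktop backend is down.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker daemon not reachable: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("docker server %s is reachable\n", out)
}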

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-213230" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "0e9988291015c86e59eebb1e91026fbaaa45e5a2ab3f5a89379387a294619e4a",
	        "Created": "2022-10-26T04:34:19.4942893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (111.561906ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:28.801642   20005 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.18s)
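This wait fails at client-config time: after the restart never completed, the kubeconfig has no "old-k8s-version-213230" context to build a client from. A minimal sketch of that context-existence check using client-go's clientcmd loader (assuming k8s.io/client-go is on the module path; the program and its messages are ours):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// NewDefaultClientConfigLoadingRules honors $KUBECONFIG and the
	// default ~/.kube/config location, like kubectl does.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const name = "old-k8s-version-213230"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Println("context found:", name)
}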

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-213230" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-213230 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-213230 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (32.881102ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-213230" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-213230 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "0e9988291015c86e59eebb1e91026fbaaa45e5a2ab3f5a89379387a294619e4a",
	        "Created": "2022-10-26T04:34:19.4942893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (135.576034ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:29.080767   20014 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.28s)
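The assertion above pairs a kubectl describe against the named context with a substring check for the expected addon image. A rough, self-contained equivalent (our code; the context, namespace, and image string are the ones this test uses):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-213230",
		"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
	if err != nil {
		// With no such context, kubectl exits 1 before ever reaching a server.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), "k8s.gcr.io/echoserver:1.4") {
		fmt.Println("addon did not load correct image")
		return
	}
	fmt.Println("expected image present")
}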

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-213230 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-213230 "sudo crictl images -o json": exit status 80 (210.326628ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-213230 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "0e9988291015c86e59eebb1e91026fbaaa45e5a2ab3f5a89379387a294619e4a",
	        "Created": "2022-10-26T04:34:19.4942893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (118.384776ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:29.479358   20032 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.40s)
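The "unexpected end of JSON input" above is what encoding/json reports when handed the empty output of the failed ssh step; crictl images -o json normally emits an object whose images array carries repoTags. A sketch of that decode step (the struct fields follow crictl's output format and should be treated as an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func imageTags(raw []byte) ([]string, error) {
	var parsed crictlImages
	if err := json.Unmarshal(raw, &parsed); err != nil {
		// An empty []byte yields "unexpected end of JSON input".
		return nil, fmt.Errorf("decode images json: %w", err)
	}
	var tags []string
	for _, img := range parsed.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	// Empty input reproduces exactly the error seen in this test.
	if _, err := imageTags([]byte("")); err != nil {
		fmt.Println("error:", err)
	}
}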

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-213232" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "5cc934a1c7286dba887857b0f42792283e67c8ad2d39b77b5e6e0d6a21b97cfb",
	        "Created": "2022-10-26T04:34:20.66411019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (112.614632ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:29.480675   20033 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.18s)
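The "Nonexistent" state printed by every post-mortem here traces back to docker container inspect --format {{.State.Status}} failing with "No such container". A minimal sketch of that mapping (the helper is ours, not minikube's status implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState classifies a profile container the way the status errors above
// arise: a missing container maps to "Nonexistent", anything else to the
// state string Docker reports.
func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect", container,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Error"
	}
	return strings.TrimSpace(string(out)) // e.g. "running", "exited"
}

func main() {
	fmt.Println(hostState("no-preload-213232"))
}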

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-213230 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-213230 --alsologtostderr -v=1: exit status 80 (205.726196ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:34:29.533423   20040 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:34:29.533615   20040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:29.533620   20040 out.go:309] Setting ErrFile to fd 2...
	I1025 21:34:29.533624   20040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:29.533747   20040 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:34:29.534059   20040 out.go:303] Setting JSON to false
	I1025 21:34:29.534075   20040 mustload.go:65] Loading cluster: old-k8s-version-213230
	I1025 21:34:29.534385   20040 config.go:180] Loaded profile config "old-k8s-version-213230": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 21:34:29.534732   20040 cli_runner.go:164] Run: docker container inspect old-k8s-version-213230 --format={{.State.Status}}
	W1025 21:34:29.596479   20040 cli_runner.go:211] docker container inspect old-k8s-version-213230 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:29.618753   20040 out.go:177] 
	W1025 21:34:29.639683   20040 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230
	
	W1025 21:34:29.639705   20040 out.go:239] * 
	W1025 21:34:29.643675   20040 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_status_8980859c28362053cbc8940f41f258f108f0854e_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:34:29.664334   20040 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p old-k8s-version-213230 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "0e9988291015c86e59eebb1e91026fbaaa45e5a2ab3f5a89379387a294619e4a",
	        "Created": "2022-10-26T04:34:19.4942893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (159.738094ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:29.912318   20055 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-213230

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:235: (dbg) docker inspect old-k8s-version-213230:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-213230",
	        "Id": "0e9988291015c86e59eebb1e91026fbaaa45e5a2ab3f5a89379387a294619e4a",
	        "Created": "2022-10-26T04:34:19.4942893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-213230"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-213230 -n old-k8s-version-213230: exit status 7 (113.610837ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:30.099767   20066 status.go:249] status error: host: state: unknown state "old-k8s-version-213230": docker container inspect old-k8s-version-213230 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-213230

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:241: "old-k8s-version-213230" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.62s)
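The harness notes "status error: exit status 7 (may be ok)" because minikube status signals component state through its exit code while still printing a usable host state on stdout. A sketch of reading both (binary path and profile name are taken from this log; the exit-code interpretation is the harness's, not ours):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-213230")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host:", host)
	case errors.As(err, &exitErr):
		// Non-zero exits like 7 still come with stdout ("Nonexistent" here),
		// which is why the caller treats them as informative rather than fatal.
		fmt.Printf("host: %s (exit code %d, may be ok)\n", host, exitErr.ExitCode())
	default:
		fmt.Println("could not run status:", err)
	}
}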

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-213232" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-213232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-213232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (37.276263ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-213232" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-213232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "5cc934a1c7286dba887857b0f42792283e67c8ad2d39b77b5e6e0d6a21b97cfb",
	        "Created": "2022-10-26T04:34:20.66411019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (118.418418ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:34:29.703170   20047 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-213232 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-213232 "sudo crictl images -o json": exit status 80 (207.703197ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-213232 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.3",
- 	"registry.k8s.io/kube-controller-manager:v1.25.3",
- 	"registry.k8s.io/kube-proxy:v1.25.3",
- 	"registry.k8s.io/kube-scheduler:v1.25.3",
- 	"registry.k8s.io/pause:3.8",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "5cc934a1c7286dba887857b0f42792283e67c8ad2d39b77b5e6e0d6a21b97cfb",
	        "Created": "2022-10-26T04:34:20.66411019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (120.395512ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:34:30.099415   20065 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

** /stderr **

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)
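
Exit status 7 from "minikube status" is tolerated by the post-mortem helper ("may be ok"): it denotes a stopped or missing host rather than a command failure, and the "Nonexistent" state above is derived from "docker container inspect" failing with "No such container". A rough Go sketch of that probe follows; the names and the exact mapping are illustrative, not minikube's status.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState mirrors the probe seen in the log: ask Docker for the
	// container's state and map an inspect failure to "Nonexistent".
	func hostState(name string) string {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			// "Error: No such container: <name>" lands here (exit status 1).
			return "Nonexistent"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		fmt.Println(hostState("no-preload-213232"))
	}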

TestStartStop/group/no-preload/serial/Pause (0.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-213232 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-213232 --alsologtostderr -v=1: exit status 80 (264.386117ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 21:34:30.154980   20074 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:34:30.155131   20074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:30.155136   20074 out.go:309] Setting ErrFile to fd 2...
	I1025 21:34:30.155139   20074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:30.155263   20074 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:34:30.155641   20074 out.go:303] Setting JSON to false
	I1025 21:34:30.155661   20074 mustload.go:65] Loading cluster: no-preload-213232
	I1025 21:34:30.156022   20074 config.go:180] Loaded profile config "no-preload-213232": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:34:30.156360   20074 cli_runner.go:164] Run: docker container inspect no-preload-213232 --format={{.State.Status}}
	W1025 21:34:30.221071   20074 cli_runner.go:211] docker container inspect no-preload-213232 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:30.243553   20074 out.go:177] 
	W1025 21:34:30.292737   20074 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232
	
	W1025 21:34:30.292759   20074 out.go:239] * 
	* 
	W1025 21:34:30.296579   20074 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:34:30.336419   20074 out.go:177] 

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p no-preload-213232 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "5cc934a1c7286dba887857b0f42792283e67c8ad2d39b77b5e6e0d6a21b97cfb",
	        "Created": "2022-10-26T04:34:20.66411019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (115.31522ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:34:30.546626   20087 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-213232
helpers_test.go:235: (dbg) docker inspect no-preload-213232:

-- stdout --
	[
	    {
	        "Name": "no-preload-213232",
	        "Id": "5cc934a1c7286dba887857b0f42792283e67c8ad2d39b77b5e6e0d6a21b97cfb",
	        "Created": "2022-10-26T04:34:20.66411019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-213232"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-213232 -n no-preload-213232: exit status 7 (116.192119ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:34:30.728749   20101 status.go:249] status error: host: state: unknown state "no-preload-213232": docker container inspect no-preload-213232 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-213232

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-213232" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.63s)
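
The pause run never reaches the pause step: the profile is loaded (mustload), the container's state is inspected first, and the command exits with code 80 (GUEST_STATUS) once "docker container inspect" reports the container gone. A stripped-down illustration of that guard, under the same assumptions as the sketches above and not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		profile := "no-preload-213232" // the profile under test
		err := exec.Command("docker", "container", "inspect",
			profile, "--format", "{{.State.Status}}").Run()
		if err != nil {
			// Mirrors the GUEST_STATUS exit observed above (code 80).
			fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_STATUS: unknown state %q\n", profile)
			os.Exit(80)
		}
		// Only now would the kubelet and workloads inside the node be paused.
	}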

TestStartStop/group/embed-certs/serial/FirstStart (39.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-213431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-213431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (39.667899369s)

-- stdout --
	* [embed-certs-213431] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node embed-certs-213431 in cluster embed-certs-213431
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-213431" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:34:31.564077   20144 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:34:31.564444   20144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:31.564456   20144 out.go:309] Setting ErrFile to fd 2...
	I1025 21:34:31.564463   20144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:31.564714   20144 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:34:31.565699   20144 out.go:303] Setting JSON to false
	I1025 21:34:31.581505   20144 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5640,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:34:31.581596   20144 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:34:31.629958   20144 out.go:177] * [embed-certs-213431] minikube v1.27.1 on Darwin 12.6
	I1025 21:34:31.652076   20144 notify.go:220] Checking for updates...
	I1025 21:34:31.673006   20144 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:34:31.715733   20144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:34:31.758056   20144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:34:31.800075   20144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:34:31.841778   20144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:34:31.863876   20144 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:34:31.863972   20144 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:34:31.932937   20144 docker.go:137] docker version: linux-20.10.17
	I1025 21:34:31.933072   20144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:34:32.101380   20144 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:34:32.008682534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:34:32.122933   20144 out.go:177] * Using the docker driver based on user configuration
	I1025 21:34:32.143697   20144 start.go:282] selected driver: docker
	I1025 21:34:32.143710   20144 start.go:808] validating driver "docker" against <nil>
	I1025 21:34:32.143729   20144 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:34:32.145866   20144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:34:32.279709   20144 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:34:32.222657654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:34:32.279849   20144 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:34:32.279998   20144 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:34:32.301883   20144 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:34:32.323398   20144 cni.go:95] Creating CNI manager for ""
	I1025 21:34:32.323424   20144 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:34:32.323437   20144 start_flags.go:317] config:
	{Name:embed-certs-213431 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-213431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:34:32.345620   20144 out.go:177] * Starting control plane node embed-certs-213431 in cluster embed-certs-213431
	I1025 21:34:32.389301   20144 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:34:32.410190   20144 out.go:177] * Pulling base image ...
	I1025 21:34:32.452492   20144 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:34:32.452497   20144 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:34:32.452554   20144 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:34:32.452576   20144 cache.go:57] Caching tarball of preloaded images
	I1025 21:34:32.452864   20144 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:34:32.452891   20144 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:34:32.453962   20144 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/embed-certs-213431/config.json ...
	I1025 21:34:32.454107   20144 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/embed-certs-213431/config.json: {Name:mkbf7d263db5d86dd3409d52db699e373b158094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:34:32.517285   20144 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:34:32.517302   20144 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:34:32.517310   20144 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:34:32.517349   20144 start.go:364] acquiring machines lock for embed-certs-213431: {Name:mke416081ef15e84157f62e109b2319d3307f98a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:34:32.517508   20144 start.go:368] acquired machines lock for "embed-certs-213431" in 148.588µs
	I1025 21:34:32.517536   20144 start.go:93] Provisioning new machine with config: &{Name:embed-certs-213431 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-213431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:34:32.517642   20144 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:34:32.539262   20144 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:34:32.539455   20144 start.go:159] libmachine.API.Create for "embed-certs-213431" (driver="docker")
	I1025 21:34:32.539480   20144 client.go:168] LocalClient.Create starting
	I1025 21:34:32.539538   20144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:34:32.539566   20144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:32.539582   20144 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:32.539648   20144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:34:32.539682   20144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:32.539694   20144 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:32.540099   20144 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:34:32.603139   20144 cli_runner.go:211] docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:34:32.603256   20144 network_create.go:272] running [docker network inspect embed-certs-213431] to gather additional debugging logs...
	I1025 21:34:32.603272   20144 cli_runner.go:164] Run: docker network inspect embed-certs-213431
	W1025 21:34:32.664404   20144 cli_runner.go:211] docker network inspect embed-certs-213431 returned with exit code 1
	I1025 21:34:32.664427   20144 network_create.go:275] error running [docker network inspect embed-certs-213431]: docker network inspect embed-certs-213431: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-213431
	I1025 21:34:32.664440   20144 network_create.go:277] output of [docker network inspect embed-certs-213431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-213431
	
	** /stderr **
	I1025 21:34:32.664538   20144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:34:32.726729   20144 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00053c800] misses:0}
	I1025 21:34:32.726769   20144 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:32.726783   20144 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:34:32.726863   20144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	W1025 21:34:32.790231   20144 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431 returned with exit code 1
	W1025 21:34:32.790264   20144 network_create.go:107] failed to create docker network embed-certs-213431 192.168.49.0/24, will retry: subnet is taken
	I1025 21:34:32.791321   20144 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800] amended:false}} dirty:map[] misses:0}
	I1025 21:34:32.791582   20144 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:32.791794   20144 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800] amended:true}} dirty:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010] misses:0}
	I1025 21:34:32.791809   20144 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:32.791818   20144 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:34:32.791885   20144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	W1025 21:34:32.853552   20144 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431 returned with exit code 1
	W1025 21:34:32.853604   20144 network_create.go:107] failed to create docker network embed-certs-213431 192.168.58.0/24, will retry: subnet is taken
	I1025 21:34:32.853867   20144 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800] amended:true}} dirty:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010] misses:1}
	I1025 21:34:32.853885   20144 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:32.854089   20144 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800] amended:true}} dirty:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010 192.168.67.0:0xc000b0a8f0] misses:1}
	I1025 21:34:32.854105   20144 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:32.854112   20144 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:34:32.854180   20144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	I1025 21:34:32.967380   20144 network_create.go:99] docker network embed-certs-213431 192.168.67.0/24 created
	I1025 21:34:32.967421   20144 kic.go:106] calculated static IP "192.168.67.2" for the "embed-certs-213431" container
	I1025 21:34:32.967501   20144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:34:33.179138   20144 cli_runner.go:164] Run: docker volume create embed-certs-213431 --label name.minikube.sigs.k8s.io=embed-certs-213431 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:34:33.336055   20144 oci.go:103] Successfully created a docker volume embed-certs-213431
	I1025 21:34:33.336202   20144 cli_runner.go:164] Run: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:34:33.621964   20144 cli_runner.go:211] docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:34:33.622011   20144 client.go:171] LocalClient.Create took 1.082520425s
	I1025 21:34:35.624347   20144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:34:35.624494   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:35.690444   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:35.690544   20144 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:35.969029   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:36.032181   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:36.032279   20144 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:36.574378   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:36.655100   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:36.655193   20144 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:37.311256   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:37.376419   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:34:37.376538   20144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:34:37.376570   20144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:37.376619   20144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:34:37.376670   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:37.437167   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:37.437259   20144 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:37.670765   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:37.733705   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:37.733809   20144 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:38.181013   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:38.245840   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:38.245991   20144 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:38.566347   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:38.628447   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:34:38.628538   20144 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:39.183830   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:34:39.247942   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:34:39.248050   20144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:34:39.248069   20144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:39.248085   20144 start.go:128] duration metric: createHost completed in 6.730416342s
	I1025 21:34:39.248092   20144 start.go:83] releasing machines lock for "embed-certs-213431", held for 6.730555124s
	W1025 21:34:39.248107   20144 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:34:39.248518   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:39.309819   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:39.309865   20144 delete.go:82] Unable to get host status for embed-certs-213431, assuming it has already been deleted: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:34:39.310027   20144 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:34:39.310039   20144 start.go:618] Will try again in 5 seconds ...
	I1025 21:34:44.310878   20144 start.go:364] acquiring machines lock for embed-certs-213431: {Name:mke416081ef15e84157f62e109b2319d3307f98a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:34:44.311003   20144 start.go:368] acquired machines lock for "embed-certs-213431" in 98.607µs
	I1025 21:34:44.311024   20144 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:34:44.311037   20144 fix.go:55] fixHost starting: 
	I1025 21:34:44.311337   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:44.377656   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:44.377697   20144 fix.go:103] recreateIfNeeded on embed-certs-213431: state= err=unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:44.377717   20144 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:34:44.422118   20144 out.go:177] * docker "embed-certs-213431" container is missing, will recreate.
	I1025 21:34:44.443344   20144 delete.go:124] DEMOLISHING embed-certs-213431 ...
	I1025 21:34:44.443554   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:44.504967   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:34:44.505008   20144 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:44.505034   20144 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:44.505375   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:44.568575   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:44.568619   20144 delete.go:82] Unable to get host status for embed-certs-213431, assuming it has already been deleted: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:44.568699   20144 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-213431
	W1025 21:34:44.628672   20144 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-213431 returned with exit code 1
	I1025 21:34:44.628700   20144 kic.go:356] could not find the container embed-certs-213431 to remove it. will try anyways
	I1025 21:34:44.628769   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:44.689622   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:34:44.689661   20144 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:44.689726   20144 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0"
	W1025 21:34:44.750741   20144 cli_runner.go:211] docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:34:44.750772   20144 oci.go:646] error shutdown embed-certs-213431: docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:45.751378   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:45.815444   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:45.815485   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:45.815497   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:45.815526   20144 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:46.218450   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:46.285865   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:46.285910   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:46.285923   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:46.285943   20144 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:47.049779   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:47.112187   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:47.112251   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:47.112264   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:47.112282   20144 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:48.592477   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:48.657073   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:48.657135   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:48.657148   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:48.657166   20144 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:49.864787   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:49.926599   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:49.926654   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:49.926666   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:49.926685   20144 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:52.156237   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:52.222176   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:52.222261   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:52.222274   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:52.222297   20144 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:55.330219   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:34:55.397085   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:55.397124   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:34:55.397138   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:34:55.397158   20144 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:00.915608   20144 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:00.981653   20144 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:00.981708   20144 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:00.981720   20144 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:00.981761   20144 oci.go:88] couldn't shut down embed-certs-213431 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	 
	I1025 21:35:00.981854   20144 cli_runner.go:164] Run: docker rm -f -v embed-certs-213431
	I1025 21:35:01.044780   20144 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-213431
	W1025 21:35:01.104364   20144 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-213431 returned with exit code 1
	I1025 21:35:01.104458   20144 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:01.164849   20144 cli_runner.go:164] Run: docker network rm embed-certs-213431
	W1025 21:35:01.268477   20144 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:35:01.268495   20144 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:35:02.268559   20144 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:35:02.290399   20144 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:35:02.290502   20144 start.go:159] libmachine.API.Create for "embed-certs-213431" (driver="docker")
	I1025 21:35:02.290523   20144 client.go:168] LocalClient.Create starting
	I1025 21:35:02.290606   20144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:35:02.290643   20144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:02.290655   20144 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:02.290699   20144 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:35:02.290724   20144 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:02.290731   20144 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:02.311969   20144 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:35:02.376158   20144 cli_runner.go:211] docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:35:02.376248   20144 network_create.go:272] running [docker network inspect embed-certs-213431] to gather additional debugging logs...
	I1025 21:35:02.376266   20144 cli_runner.go:164] Run: docker network inspect embed-certs-213431
	W1025 21:35:02.437431   20144 cli_runner.go:211] docker network inspect embed-certs-213431 returned with exit code 1
	I1025 21:35:02.437450   20144 network_create.go:275] error running [docker network inspect embed-certs-213431]: docker network inspect embed-certs-213431: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-213431
	I1025 21:35:02.437469   20144 network_create.go:277] output of [docker network inspect embed-certs-213431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-213431
	
	** /stderr **
	I1025 21:35:02.437539   20144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:02.498521   20144 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800] amended:true}} dirty:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010 192.168.67.0:0xc000b0a8f0] misses:1}
	I1025 21:35:02.498560   20144 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:02.498762   20144 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800] amended:true}} dirty:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010 192.168.67.0:0xc000b0a8f0] misses:2}
	I1025 21:35:02.498771   20144 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:02.498979   20144 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010 192.168.67.0:0xc000b0a8f0] amended:false}} dirty:map[] misses:0}
	I1025 21:35:02.498987   20144 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:02.499188   20144 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010 192.168.67.0:0xc000b0a8f0] amended:true}} dirty:map[192.168.49.0:0xc00053c800 192.168.58.0:0xc000140010 192.168.67.0:0xc000b0a8f0 192.168.76.0:0xc0006962e8] misses:0}
	I1025 21:35:02.499202   20144 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:02.499210   20144 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:35:02.499286   20144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	I1025 21:35:02.589458   20144 network_create.go:99] docker network embed-certs-213431 192.168.76.0/24 created
	I1025 21:35:02.589485   20144 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-213431" container
	I1025 21:35:02.589598   20144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:35:02.651479   20144 cli_runner.go:164] Run: docker volume create embed-certs-213431 --label name.minikube.sigs.k8s.io=embed-certs-213431 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:35:02.712311   20144 oci.go:103] Successfully created a docker volume embed-certs-213431
	I1025 21:35:02.712415   20144 cli_runner.go:164] Run: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:35:02.848036   20144 cli_runner.go:211] docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:35:02.848088   20144 client.go:171] LocalClient.Create took 557.558746ms
	I1025 21:35:04.849859   20144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:04.849975   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:04.914380   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:04.914472   20144 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:05.115124   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:05.181165   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:05.181248   20144 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:05.625754   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:05.692626   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:05.692716   20144 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:06.099244   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:06.163106   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:06.163199   20144 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:06.757519   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:06.824832   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:06.824928   20144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:06.824948   20144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:06.825002   20144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:06.825051   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:06.885282   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:06.885392   20144 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:07.155374   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:07.220957   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:07.221066   20144 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:07.734370   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:07.799879   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:07.799970   20144 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:08.248447   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:08.315289   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:08.315405   20144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:08.315424   20144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:08.315433   20144 start.go:128] duration metric: createHost completed in 6.046839202s
	I1025 21:35:08.315485   20144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:08.315528   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:08.376280   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:08.376374   20144 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:08.689843   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:08.755408   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:08.755499   20144 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:09.021905   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:09.089993   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:09.090080   20144 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:09.858706   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:09.921534   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:09.921619   20144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:09.921660   20144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:09.921714   20144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:09.921762   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:09.981768   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:09.981871   20144 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:10.240103   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:10.302453   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:10.302559   20144 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:10.501412   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:10.567150   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:10.567234   20144 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:10.937789   20144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:11.003232   20144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:11.003321   20144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:11.003345   20144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:11.003358   20144 fix.go:57] fixHost completed within 26.692236278s
	I1025 21:35:11.003365   20144 start.go:83] releasing machines lock for "embed-certs-213431", held for 26.692267444s
	W1025 21:35:11.003506   20144 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-213431" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:35:11.046147   20144 out.go:177] 
	W1025 21:35:11.068144   20144 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:35:11.068174   20144 out.go:239] * 
	W1025 21:35:11.069517   20144 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:35:11.133027   20144 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p embed-certs-213431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3": exit status 80
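Note on the retry behavior visible in the stderr above: each failed "docker container inspect" goes through minikube's generic retry helper (the retry.go:31 lines), which sleeps a jittered, roughly doubling delay between attempts (0.4s, 0.76s, 1.5s, 1.2s, 2.2s, 3.1s, 5.5s) before oci.go:88 gives up at 21:35:00. The following is a minimal Go sketch of that pattern only; the helper name, initial delay, and jitter factor are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is an illustrative stand-in for the pattern the
// retry.go:31 lines show: run fn, and on failure sleep a jittered,
// roughly doubling delay before the next attempt, up to maxElapsed.
func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := 400 * time.Millisecond // first observed wait above is ~400ms
	var err error
	for time.Since(start) < maxElapsed {
		if err = fn(); err == nil {
			return nil
		}
		// Add up to 50% random jitter, then double the base delay.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("gave up after %v: %w", maxElapsed, err)
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New(`unknown state "embed-certs-213431"`)
	}, 10*time.Second)
	fmt.Println(err) // mirrors "couldn't shut down ... (might be okay)"
}

In this run the retries could never succeed, since the container was already gone; the loop only delays the eventual "couldn't shut down embed-certs-213431 (might be okay)" before deletion proceeds.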
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:
-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "cf8b1cf49cc7476892dc583dd4bc1e15500a2ee62cc5a58e1f33311cd866facb",
	        "Created": "2022-10-26T04:35:02.573646163Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]
-- /stdout --
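Note that this post-mortem "docker inspect embed-certs-213431" returned a network object, not a container: the container was never created (the preload-sidecar "docker run" exited 125), but the bridge network created at 04:35:02Z survived, and a bare "docker inspect" resolves a name across object types. Leftovers like this can be located by minikube's labels. A small illustrative Go sketch, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
)

// List docker networks carrying minikube's created_by label, which is
// how leftovers like "embed-certs-213431" can be found after a failed
// create and pruned with `docker network rm <name>`.
func main() {
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	fmt.Printf("leftover minikube networks:\n%s", out)
}

Removing the network by hand is also part of what the suggested "minikube delete -p embed-certs-213431" would do.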
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (111.123445ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1025 21:35:11.342377   20491 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (39.86s)
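The root cause here points at the Docker Desktop daemon rather than at minikube: the preload-sidecar "docker run" exited 125 because Docker Desktop's embedded containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections, and everything afterwards (no container, no port 22, Nonexistent host state) is fallout. A pre-flight daemon probe can separate this class of infrastructure outage from real regressions; a hypothetical Go sketch:

package main

import (
	"fmt"
	"os/exec"
)

// If the Docker daemon (or the containerd behind Docker Desktop) is not
// answering, `docker version` fails on the server half, distinguishing
// daemon outages like the one above from genuine minikube failures.
func main() {
	out, err := exec.Command("docker", "version",
		"--format", "{{.Server.Version}}").CombinedOutput()
	if err != nil {
		fmt.Printf("docker daemon unreachable: %v\n%s", err, out)
		return
	}
	fmt.Printf("docker server %s is answering\n", out)
}

The same containerd connection error recurs in the remaining FirstStart failures in this group, including the default-k8s-diff-port run below.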
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.72s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-213432 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-213432 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (39.531382769s)
-- stdout --
	* [default-k8s-diff-port-213432] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node default-k8s-diff-port-213432 in cluster default-k8s-diff-port-213432
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-diff-port-213432" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
-- /stdout --
** stderr ** 
	I1025 21:34:32.980502   20206 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:34:32.980670   20206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:32.980676   20206 out.go:309] Setting ErrFile to fd 2...
	I1025 21:34:32.980679   20206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:32.980793   20206 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:34:32.981290   20206 out.go:303] Setting JSON to false
	I1025 21:34:32.996940   20206 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5641,"bootTime":1666753231,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:34:32.997049   20206 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:34:33.018788   20206 out.go:177] * [default-k8s-diff-port-213432] minikube v1.27.1 on Darwin 12.6
	I1025 21:34:33.060927   20206 notify.go:220] Checking for updates...
	I1025 21:34:33.082821   20206 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:34:33.124909   20206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:34:33.166871   20206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:34:33.208825   20206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:34:33.229734   20206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:34:33.251998   20206 config.go:180] Loaded profile config "embed-certs-213431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:34:33.252201   20206 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:34:33.252300   20206 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:34:33.342989   20206 docker.go:137] docker version: linux-20.10.17
	I1025 21:34:33.343167   20206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:34:33.471863   20206 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:41 SystemTime:2022-10-26 04:34:33.418821663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:34:33.516787   20206 out.go:177] * Using the docker driver based on user configuration
	I1025 21:34:33.542727   20206 start.go:282] selected driver: docker
	I1025 21:34:33.542759   20206 start.go:808] validating driver "docker" against <nil>
	I1025 21:34:33.542785   20206 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:34:33.546364   20206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:34:33.680444   20206 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:44 SystemTime:2022-10-26 04:34:33.624678696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:34:33.680577   20206 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 21:34:33.680711   20206 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:34:33.703641   20206 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:34:33.724320   20206 cni.go:95] Creating CNI manager for ""
	I1025 21:34:33.724349   20206 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:34:33.724369   20206 start_flags.go:317] config:
	{Name:default-k8s-diff-port-213432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-213432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:34:33.746346   20206 out.go:177] * Starting control plane node default-k8s-diff-port-213432 in cluster default-k8s-diff-port-213432
	I1025 21:34:33.790312   20206 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:34:33.812203   20206 out.go:177] * Pulling base image ...
	I1025 21:34:33.854405   20206 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:34:33.854417   20206 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:34:33.854488   20206 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:34:33.854505   20206 cache.go:57] Caching tarball of preloaded images
	I1025 21:34:33.854688   20206 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:34:33.854710   20206 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:34:33.855698   20206 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/default-k8s-diff-port-213432/config.json ...
	I1025 21:34:33.855823   20206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/default-k8s-diff-port-213432/config.json: {Name:mk8a8932110cb6c3ee190a623e32fa300d9089cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:34:33.918306   20206 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:34:33.918322   20206 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:34:33.918337   20206 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:34:33.918371   20206 start.go:364] acquiring machines lock for default-k8s-diff-port-213432: {Name:mkfae46218f26a8df96ce623e68a2e2d4ae3bab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:34:33.918511   20206 start.go:368] acquired machines lock for "default-k8s-diff-port-213432" in 129.54µs
	I1025 21:34:33.918536   20206 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-213432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-213432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
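
Annotation: the struct dump above is the full cluster config that start.go:93 provisions from, and the same data was just saved to config.json. The details that matter for this test are driver docker, Kubernetes v1.25.3, and Port 8444 instead of minikube's default 8443, which is exactly what the default-k8s-diff-port profile exercises. A hedged Go sketch that pulls those fields back out of the saved profile; the field names mirror the dump above and are an illustrative subset, not minikube's canonical schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Illustrative subset of the config fields shown in the dump above.
    type clusterConfig struct {
        Name             string
        Driver           string
        KubernetesConfig struct {
            KubernetesVersion string
            NodePort          int
        }
    }

    func main() {
        raw, err := os.ReadFile("/Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/default-k8s-diff-port-213432/config.json")
        if err != nil {
            panic(err)
        }
        var cc clusterConfig
        if err := json.Unmarshal(raw, &cc); err != nil {
            panic(err)
        }
        // Expect: default-k8s-diff-port-213432 docker v1.25.3 8444
        fmt.Println(cc.Name, cc.Driver, cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.NodePort)
    }
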
	I1025 21:34:33.918590   20206 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:34:33.963247   20206 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:34:33.963650   20206 start.go:159] libmachine.API.Create for "default-k8s-diff-port-213432" (driver="docker")
	I1025 21:34:33.963686   20206 client.go:168] LocalClient.Create starting
	I1025 21:34:33.963872   20206 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:34:33.963954   20206 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:33.963986   20206 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:33.964040   20206 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:34:33.964085   20206 main.go:134] libmachine: Decoding PEM data...
	I1025 21:34:33.964101   20206 main.go:134] libmachine: Parsing certificate...
	I1025 21:34:33.964975   20206 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:34:34.026807   20206 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:34:34.026911   20206 network_create.go:272] running [docker network inspect default-k8s-diff-port-213432] to gather additional debugging logs...
	I1025 21:34:34.026925   20206 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432
	W1025 21:34:34.087384   20206 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:34.087403   20206 network_create.go:275] error running [docker network inspect default-k8s-diff-port-213432]: docker network inspect default-k8s-diff-port-213432: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-213432
	I1025 21:34:34.087416   20206 network_create.go:277] output of [docker network inspect default-k8s-diff-port-213432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-213432
	
	** /stderr **
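
Annotation: exit code 1 plus "No such network" from docker network inspect is the expected first-run result here: the profile's network does not exist yet, and the inspect (with its long --format Go template gathering name, driver, subnet, gateway, MTU and container IPs in one call) is only a probe before creation. A Go sketch of that probe, distinguishing "missing" from a real docker failure by inspecting stderr; illustrative only, since minikube routes this through cli_runner:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists reports whether the named docker network is present,
    // treating "No such network" as a normal pre-create condition.
    func networkExists(name string) (bool, error) {
        cmd := exec.Command("docker", "network", "inspect", name)
        var stderr bytes.Buffer
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            if strings.Contains(stderr.String(), "No such network") {
                return false, nil // expected before the first create
            }
            return false, fmt.Errorf("docker network inspect %s: %v", name, err)
        }
        return true, nil
    }

    func main() {
        ok, err := networkExists("default-k8s-diff-port-213432")
        fmt.Println(ok, err)
    }
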
	I1025 21:34:34.087482   20206 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:34:34.148876   20206 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000012cd8] misses:0}
	I1025 21:34:34.148911   20206 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.148924   20206 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:34:34.148995   20206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	W1025 21:34:34.209042   20206 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:34:34.209075   20206 network_create.go:107] failed to create docker network default-k8s-diff-port-213432 192.168.49.0/24, will retry: subnet is taken
	I1025 21:34:34.209320   20206 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:false}} dirty:map[] misses:0}
	I1025 21:34:34.209338   20206 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.209538   20206 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398] misses:0}
	I1025 21:34:34.209554   20206 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.209564   20206 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:34:34.209982   20206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	W1025 21:34:34.271920   20206 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:34:34.271950   20206 network_create.go:107] failed to create docker network default-k8s-diff-port-213432 192.168.58.0/24, will retry: subnet is taken
	I1025 21:34:34.272180   20206 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398] misses:1}
	I1025 21:34:34.272196   20206 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.272395   20206 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208] misses:1}
	I1025 21:34:34.272407   20206 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.272414   20206 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:34:34.272476   20206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	W1025 21:34:34.332899   20206 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:34:34.332943   20206 network_create.go:107] failed to create docker network default-k8s-diff-port-213432 192.168.67.0/24, will retry: subnet is taken
	I1025 21:34:34.333193   20206 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208] misses:2}
	I1025 21:34:34.333212   20206 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.333421   20206 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008] misses:2}
	I1025 21:34:34.333436   20206 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:34:34.333443   20206 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:34:34.333518   20206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	I1025 21:34:34.425465   20206 network_create.go:99] docker network default-k8s-diff-port-213432 192.168.76.0/24 created
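
Annotation: the create finally succeeds on the fourth candidate. The walk above (192.168.49.0 -> 58 -> 67 -> 76) steps the third octet by 9 for each /24 attempt, and each candidate is reserved for 1m0s so concurrent test profiles do not race for the same subnet. The stepping, reproduced as a trivial sketch:

    package main

    import "fmt"

    func main() {
        // minikube's candidate /24s, as seen in the log: 49, 58, 67, 76, 85, ...
        for n := 0; n < 5; n++ {
            fmt.Printf("192.168.%d.0/24\n", 49+9*n)
        }
    }
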
	I1025 21:34:34.425496   20206 kic.go:106] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-213432" container
	I1025 21:34:34.425582   20206 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:34:34.487601   20206 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-213432 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:34:34.548960   20206 oci.go:103] Successfully created a docker volume default-k8s-diff-port-213432
	I1025 21:34:34.549090   20206 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:34:34.758972   20206 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:34:34.759017   20206 client.go:171] LocalClient.Create took 795.320351ms
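
Annotation: the preload sidecar (docker run ... --entrypoint /usr/bin/test ... -d /var/lib) exits 125, which by docker CLI convention means the daemon or CLI failed before the container ever started, so the test binary inside never ran; the underlying reason surfaces at 21:34:40 below. A Go sketch of interpreting docker run exit codes, using a generic busybox probe rather than the sidecar command:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("docker", "run", "--rm", "busybox", "true").Run()
        if err == nil {
            fmt.Println("container ran fine")
            return
        }
        if ee, ok := err.(*exec.ExitError); ok {
            switch ee.ExitCode() {
            case 125:
                fmt.Println("daemon-side failure before the container started (this log)")
            case 126:
                fmt.Println("command found in image but not executable")
            case 127:
                fmt.Println("command not found in image")
            default:
                fmt.Println("container exited with", ee.ExitCode())
            }
        }
    }
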
	I1025 21:34:36.761387   20206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:34:36.761504   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:36.824318   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:36.824421   20206 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:37.101083   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:37.166489   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:37.166582   20206 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:37.706915   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:37.768255   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:37.768353   20206 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:38.425887   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:38.493496   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:34:38.493594   20206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:34:38.493613   20206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:38.493668   20206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:34:38.493724   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:38.554141   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:38.554231   20206 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:38.787752   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:38.851510   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:38.851608   20206 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:39.296854   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:39.358195   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:39.358285   20206 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:39.678851   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:39.741880   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:39.741964   20206 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:40.298271   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:34:40.363615   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:34:40.363699   20206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:34:40.363722   20206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:40.363741   20206 start.go:128] duration metric: createHost completed in 6.445127022s
	I1025 21:34:40.363749   20206 start.go:83] releasing machines lock for "default-k8s-diff-port-213432", held for 6.445209912s
	W1025 21:34:40.363763   20206 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:34:40.364161   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:40.425106   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:40.425177   20206 delete.go:82] Unable to get host status for default-k8s-diff-port-213432, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:34:40.425343   20206 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:34:40.425354   20206 start.go:618] Will try again in 5 seconds ...
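
Annotation: the stderr above finally names the root cause: Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) is refusing connections. Network and volume creation still succeeded, so the daemon's API is up and only the container-start path is broken, which is why every docker container inspect reports "No such container" (nothing was ever created). minikube's response is one coarse outer retry: tear down the half-created host and redo createHost after a fixed 5s pause. A sketch of that shape, with a stand-in error in place of the real sidecar failure:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry mirrors the single outer retry visible at start.go:618
    // above: on failure, pause 5s and attempt the whole start once more.
    func startWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            return start()
        }
        return nil
    }

    func main() {
        fmt.Println(startWithRetry(func() error {
            return errors.New("exit status 125") // stand-in for the sidecar failure above
        }))
    }
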
	I1025 21:34:45.426104   20206 start.go:364] acquiring machines lock for default-k8s-diff-port-213432: {Name:mkfae46218f26a8df96ce623e68a2e2d4ae3bab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:34:45.426349   20206 start.go:368] acquired machines lock for "default-k8s-diff-port-213432" in 206.823µs
	I1025 21:34:45.426381   20206 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:34:45.426395   20206 fix.go:55] fixHost starting: 
	I1025 21:34:45.426852   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:45.493531   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:45.493568   20206 fix.go:103] recreateIfNeeded on default-k8s-diff-port-213432: state= err=unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:45.493595   20206 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:34:45.515513   20206 out.go:177] * docker "default-k8s-diff-port-213432" container is missing, will recreate.
	I1025 21:34:45.558059   20206 delete.go:124] DEMOLISHING default-k8s-diff-port-213432 ...
	I1025 21:34:45.558217   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:45.628306   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:34:45.628365   20206 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:45.628386   20206 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:45.628792   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:45.689047   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:45.689106   20206 delete.go:82] Unable to get host status for default-k8s-diff-port-213432, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:45.689213   20206 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-213432
	W1025 21:34:45.750216   20206 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:34:45.750242   20206 kic.go:356] could not find the container default-k8s-diff-port-213432 to remove it. will try anyways
	I1025 21:34:45.750351   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:45.815474   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:34:45.815509   20206 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:45.815583   20206 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0"
	W1025 21:34:45.876290   20206 cli_runner.go:211] docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:34:45.876314   20206 oci.go:646] error shutdown default-k8s-diff-port-213432: docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:46.878759   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:46.942258   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:46.942301   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:46.942323   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:46.942343   20206 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
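
Annotation: the waits in this shutdown-verification loop (0.40s, 0.76s, 1.48s, 1.21s, 2.23s, 3.11s, 5.52s) look like randomized, roughly exponential backoff, presumably what retry.go applies; note also that the literal %v in "couldn't verify container is exited. %v" is an unsubstituted format verb in the logged error message itself, not container state. A generic jittered-exponential sketch of that backoff shape; the base, multiplier and jitter range are assumptions, not minikube's actual tuning:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        // Assumed parameters: base 400ms, doubling, +/-50% jitter.
        d := 400 * time.Millisecond
        for i := 0; i < 7; i++ {
            jittered := time.Duration(float64(d) * (0.5 + rand.Float64())) // 0.5x .. 1.5x
            fmt.Println(jittered.Round(time.Millisecond))
            d *= 2
        }
    }
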
	I1025 21:34:47.345297   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:47.410712   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:47.410759   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:47.410774   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:47.410794   20206 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:48.174498   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:48.259134   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:48.259183   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:48.259198   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:48.259221   20206 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:49.739474   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:49.804501   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:49.804553   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:49.804576   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:49.804598   20206 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:51.012317   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:51.078500   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:51.078541   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:51.078560   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:51.078583   20206 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:53.309589   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:53.373147   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:53.373190   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:53.373200   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:53.373220   20206 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:56.479683   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:34:56.542354   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:34:56.542397   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:34:56.542444   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:34:56.542465   20206 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:02.060868   20206 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:02.123152   20206 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:02.123192   20206 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:02.123208   20206 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:02.123238   20206 oci.go:88] couldn't shut down default-k8s-diff-port-213432 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	 
	I1025 21:35:02.123308   20206 cli_runner.go:164] Run: docker rm -f -v default-k8s-diff-port-213432
	I1025 21:35:02.187242   20206 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-213432
	W1025 21:35:02.247758   20206 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:02.247865   20206 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:02.320215   20206 cli_runner.go:164] Run: docker network rm default-k8s-diff-port-213432
	W1025 21:35:02.423329   20206 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:35:02.423347   20206 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:35:03.423511   20206 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:35:03.445501   20206 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:35:03.445683   20206 start.go:159] libmachine.API.Create for "default-k8s-diff-port-213432" (driver="docker")
	I1025 21:35:03.445718   20206 client.go:168] LocalClient.Create starting
	I1025 21:35:03.445872   20206 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:35:03.445936   20206 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:03.445959   20206 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:03.446056   20206 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:35:03.446129   20206 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:03.446145   20206 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:03.446848   20206 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:35:03.512863   20206 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:35:03.512954   20206 network_create.go:272] running [docker network inspect default-k8s-diff-port-213432] to gather additional debugging logs...
	I1025 21:35:03.512976   20206 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432
	W1025 21:35:03.575098   20206 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:03.575116   20206 network_create.go:275] error running [docker network inspect default-k8s-diff-port-213432]: docker network inspect default-k8s-diff-port-213432: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-213432
	I1025 21:35:03.575130   20206 network_create.go:277] output of [docker network inspect default-k8s-diff-port-213432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-213432
	
	** /stderr **
	I1025 21:35:03.575218   20206 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:03.635865   20206 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008] misses:2}
	I1025 21:35:03.635905   20206 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:03.636128   20206 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008] misses:3}
	I1025 21:35:03.636138   20206 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:03.636335   20206 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008] amended:false}} dirty:map[] misses:0}
	I1025 21:35:03.636343   20206 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:03.636541   20206 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008] amended:false}} dirty:map[] misses:0}
	I1025 21:35:03.636549   20206 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:03.636738   20206 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008] amended:true}} dirty:map[192.168.49.0:0xc000012cd8 192.168.58.0:0xc000b9f398 192.168.67.0:0xc000596208 192.168.76.0:0xc000012008 192.168.85.0:0xc0007d2268] misses:0}
	I1025 21:35:03.636753   20206 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:03.636763   20206 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:35:03.636834   20206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	I1025 21:35:03.726675   20206 network_create.go:99] docker network default-k8s-diff-port-213432 192.168.85.0/24 created
	I1025 21:35:03.726722   20206 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-213432" container
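
Annotation: because the reservations from the first attempt are still unexpired (they last 1m0s), the retry skips 49/58/67/76 and lands on 192.168.85.0/24, and kic.go:106 again derives the node's static IP from the fresh subnet: the container gets the first client address, gateway + 1 (192.168.85.1 -> 192.168.85.2), matching ClientMin in the subnet record above. The arithmetic, as a sketch:

    package main

    import (
        "fmt"
        "net"
    )

    // nodeIP returns the first client address of a /24: network + 2
    // (the gateway takes network + 1).
    func nodeIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        return net.IPv4(ip[0], ip[1], ip[2], ip[3]+2), nil
    }

    func main() {
        ip, _ := nodeIP("192.168.85.0/24")
        fmt.Println(ip) // 192.168.85.2
    }
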
	I1025 21:35:03.726816   20206 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:35:03.789234   20206 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-213432 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:35:03.851816   20206 oci.go:103] Successfully created a docker volume default-k8s-diff-port-213432
	I1025 21:35:03.851919   20206 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:35:03.988125   20206 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:35:03.988168   20206 client.go:171] LocalClient.Create took 542.441126ms
	I1025 21:35:05.988528   20206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:05.988652   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:06.051269   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:06.051355   20206 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:06.251959   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:06.313793   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:06.313882   20206 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:06.757466   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:06.824833   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:06.824921   20206 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:07.229630   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:07.291362   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:07.291458   20206 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:07.887069   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:07.950562   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:07.950655   20206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:07.950673   20206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:07.950719   20206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:07.950788   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:08.013234   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:08.013314   20206 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:08.283225   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:08.346397   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:08.346483   20206 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:08.859125   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:08.920942   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:08.921025   20206 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:09.369550   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:09.432267   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:09.432366   20206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:09.432380   20206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:09.432395   20206 start.go:128] duration metric: createHost completed in 6.00884347s
	I1025 21:35:09.432478   20206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:09.432567   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:09.493046   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:09.493132   20206 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:09.807390   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:09.872824   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:09.872930   20206 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:10.140402   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:10.204761   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:10.204839   20206 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:10.973507   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:11.146687   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:11.146839   20206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:11.146870   20206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:11.146967   20206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:11.147054   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:11.213439   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:11.213536   20206 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:11.471603   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:11.534487   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:11.534568   20206 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:11.733903   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:11.833673   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:11.833792   20206 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:12.205633   20206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:12.280893   20206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:12.280993   20206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:12.281023   20206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:12.281042   20206 fix.go:57] fixHost completed within 26.854559728s
	I1025 21:35:12.281049   20206 start.go:83] releasing machines lock for "default-k8s-diff-port-213432", held for 26.8546018s
	W1025 21:35:12.281196   20206 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-213432" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-213432" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:35:12.323646   20206 out.go:177] 
	W1025 21:35:12.345135   20206 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:35:12.345181   20206 out.go:239] * 
	* 
	W1025 21:35:12.346408   20206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:35:12.436731   20206 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-213432 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
E1025 21:35:12.502750    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "0cd271c91eee03a9946ccb0618b4f9ff0bbbb7f45a931c6b753e2f1b34610592",
	        "Created": "2022-10-26T04:35:03.712111532Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (111.666872ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:12.648760   20533 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.72s)
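Note on the failure above: the root cause is that Docker Desktop's containerd socket was unreachable ("connect: connection refused" on /var/run/desktop-containerd/containerd.sock), so the node container was never created and every subsequent SSH port lookup failed with "No such container". For readers decoding the Go template in those repeated inspect calls, here is a minimal, hypothetical Go sketch of the lookup; the function and variable names are illustrative assumptions, not minikube's actual helper:

    // Sketch of the host-port lookup retried in the log above. It shells out to
    // `docker container inspect` with a Go template that indexes
    // NetworkSettings.Ports["22/tcp"][0].HostPort.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            // A missing container ("Error: No such container: ...") surfaces here
            // as a non-zero exit status, which is what the log keeps retrying.
            return "", fmt.Errorf("get port 22 for %q: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("default-k8s-diff-port-213432")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh host port:", port)
    }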

TestStartStop/group/embed-certs/serial/DeployApp (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-213431 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-213431 create -f testdata/busybox.yaml: exit status 1 (33.160204ms)

** stderr ** 
	error: context "embed-certs-213431" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-213431 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "cf8b1cf49cc7476892dc583dd4bc1e15500a2ee62cc5a58e1f33311cd866facb",
	        "Created": "2022-10-26T04:35:02.573646163Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (114.091677ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:11.553706   20498 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "cf8b1cf49cc7476892dc583dd4bc1e15500a2ee62cc5a58e1f33311cd866facb",
	        "Created": "2022-10-26T04:35:02.573646163Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (112.170274ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:11.730580   20508 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.39s)
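Note on the failure above: DeployApp and the other kubectl-driven steps in this group fail secondarily. Because FirstStart never created the cluster, the kubeconfig context "embed-certs-213431" does not exist. A hypothetical Go sketch of a pre-check for that condition follows; it is not part of the test suite and is shown only to explain the failure mode:

    // Hypothetical pre-check: confirm a kubeconfig context exists before
    // invoking `kubectl --context <name> create -f ...`.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func contextExists(name string) (bool, error) {
        // `kubectl config get-contexts -o name` prints one context name per line.
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, ctx := range strings.Fields(string(out)) {
            if ctx == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("embed-certs-213431")
        fmt.Printf("context exists: %v, err: %v\n", ok, err)
    }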

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-213431 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-213431 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-213431 describe deploy/metrics-server -n kube-system: exit status 1 (33.361376ms)

** stderr ** 
	error: context "embed-certs-213431" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-213431 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "cf8b1cf49cc7476892dc583dd4bc1e15500a2ee62cc5a58e1f33311cd866facb",
	        "Created": "2022-10-26T04:35:02.573646163Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (114.759367ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:12.175049   20521 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.44s)

TestStartStop/group/embed-certs/serial/Stop (15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-213431 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p embed-certs-213431 --alsologtostderr -v=3: exit status 82 (14.825261432s)

-- stdout --
	* Stopping node "embed-certs-213431"  ...
	* Stopping node "embed-certs-213431"  ...
	* Stopping node "embed-certs-213431"  ...
	* Stopping node "embed-certs-213431"  ...
	* Stopping node "embed-certs-213431"  ...
	* Stopping node "embed-certs-213431"  ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:35:12.226039   20525 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:35:12.226220   20525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:12.226225   20525 out.go:309] Setting ErrFile to fd 2...
	I1025 21:35:12.226229   20525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:12.226342   20525 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:35:12.226649   20525 out.go:303] Setting JSON to false
	I1025 21:35:12.226798   20525 mustload.go:65] Loading cluster: embed-certs-213431
	I1025 21:35:12.227067   20525 config.go:180] Loaded profile config "embed-certs-213431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:35:12.227136   20525 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/embed-certs-213431/config.json ...
	I1025 21:35:12.227403   20525 mustload.go:65] Loading cluster: embed-certs-213431
	I1025 21:35:12.227502   20525 config.go:180] Loaded profile config "embed-certs-213431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:35:12.227537   20525 stop.go:39] StopHost: embed-certs-213431
	I1025 21:35:12.249962   20525 out.go:177] * Stopping node "embed-certs-213431"  ...
	I1025 21:35:12.271088   20525 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:12.460577   20525 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:12.460650   20525 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:35:12.460672   20525 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:12.460708   20525 retry.go:31] will retry after 1.104660288s: docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:13.566249   20525 stop.go:39] StopHost: embed-certs-213431
	I1025 21:35:13.588363   20525 out.go:177] * Stopping node "embed-certs-213431"  ...
	I1025 21:35:13.609468   20525 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:13.682088   20525 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:13.682133   20525 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:35:13.682168   20525 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:13.682188   20525 retry.go:31] will retry after 2.160763633s: docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:15.845078   20525 stop.go:39] StopHost: embed-certs-213431
	I1025 21:35:15.889224   20525 out.go:177] * Stopping node "embed-certs-213431"  ...
	I1025 21:35:15.910671   20525 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:15.974704   20525 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:15.974743   20525 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:35:15.974755   20525 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:15.974771   20525 retry.go:31] will retry after 2.62026012s: docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:18.595644   20525 stop.go:39] StopHost: embed-certs-213431
	I1025 21:35:18.655377   20525 out.go:177] * Stopping node "embed-certs-213431"  ...
	I1025 21:35:18.678250   20525 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:18.741547   20525 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:18.741588   20525 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:35:18.741601   20525 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:18.741616   20525 retry.go:31] will retry after 3.164785382s: docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:21.908719   20525 stop.go:39] StopHost: embed-certs-213431
	I1025 21:35:21.931079   20525 out.go:177] * Stopping node "embed-certs-213431"  ...
	I1025 21:35:21.974813   20525 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:22.037483   20525 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:22.037526   20525 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:35:22.037550   20525 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:22.037566   20525 retry.go:31] will retry after 4.680977329s: docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:26.720905   20525 stop.go:39] StopHost: embed-certs-213431
	I1025 21:35:26.743333   20525 out.go:177] * Stopping node "embed-certs-213431"  ...
	I1025 21:35:26.787257   20525 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:26.850645   20525 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:26.850685   20525 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	W1025 21:35:26.850700   20525 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:26.872031   20525 out.go:177] 
	W1025 21:35:26.893249   20525 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-213431 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-213431 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:26.893281   20525 out.go:239] * 
	* 
	W1025 21:35:26.897221   20525 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:35:26.958132   20525 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p embed-certs-213431 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "cf8b1cf49cc7476892dc583dd4bc1e15500a2ee62cc5a58e1f33311cd866facb",
	        "Created": "2022-10-26T04:35:02.573646163Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (113.350283ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:27.178779   20597 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (15.00s)
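Note on the failure above: the Stop log illustrates minikube's retry behavior. Each failed "docker container inspect ... --format={{.State.Status}}" is retried after a growing, apparently jittered delay (roughly 1.1s, 2.2s, 2.6s, 3.2s, 4.7s) until the stop deadline passes and the command exits with GUEST_STOP_TIMEOUT. Below is a minimal sketch of that retry-with-growing-backoff pattern; the growth factor and deadline are assumptions for illustration, not minikube's actual retry.go tuning:

    // Minimal sketch of retry-with-growing-backoff, as seen in the
    // "will retry after ..." lines above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func retryUntil(deadline time.Duration, op func() error) error {
        start := time.Now()
        delay := 1100 * time.Millisecond // starting delay, assumed for illustration
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Since(start) >= deadline {
                return fmt.Errorf("gave up after %s: %w", deadline, err)
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay = delay * 3 / 2 // wait a little longer on each attempt
        }
    }

    func main() {
        // An operation that always fails, mimicking the missing container.
        err := retryUntil(10*time.Second, func() error {
            return errors.New("No such container: embed-certs-213431")
        })
        fmt.Println(err)
    }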

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-213432 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-213432 create -f testdata/busybox.yaml: exit status 1 (33.648306ms)

** stderr ** 
	error: context "default-k8s-diff-port-213432" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-213432 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "0cd271c91eee03a9946ccb0618b4f9ff0bbbb7f45a931c6b753e2f1b34610592",
	        "Created": "2022-10-26T04:35:03.712111532Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (111.623976ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:12.858905   20540 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "0cd271c91eee03a9946ccb0618b4f9ff0bbbb7f45a931c6b753e2f1b34610592",
	        "Created": "2022-10-26T04:35:03.712111532Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (111.877715ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:13.035465   20546 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-213432 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1025 21:35:13.041735    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-213432 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-213432 describe deploy/metrics-server -n kube-system: exit status 1 (33.283413ms)

** stderr ** 
	error: context "default-k8s-diff-port-213432" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-213432 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "0cd271c91eee03a9946ccb0618b4f9ff0bbbb7f45a931c6b753e2f1b34610592",
	        "Created": "2022-10-26T04:35:03.712111532Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (111.384008ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:13.537454   20557 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.50s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-213432 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-213432 --alsologtostderr -v=3: exit status 82 (14.688105576s)

-- stdout --
	* Stopping node "default-k8s-diff-port-213432"  ...
	* Stopping node "default-k8s-diff-port-213432"  ...
	* Stopping node "default-k8s-diff-port-213432"  ...
	* Stopping node "default-k8s-diff-port-213432"  ...
	* Stopping node "default-k8s-diff-port-213432"  ...
	* Stopping node "default-k8s-diff-port-213432"  ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:35:13.587747   20561 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:35:13.588383   20561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:13.588389   20561 out.go:309] Setting ErrFile to fd 2...
	I1025 21:35:13.588394   20561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:13.588513   20561 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:35:13.609491   20561 out.go:303] Setting JSON to false
	I1025 21:35:13.609806   20561 mustload.go:65] Loading cluster: default-k8s-diff-port-213432
	I1025 21:35:13.610423   20561 config.go:180] Loaded profile config "default-k8s-diff-port-213432": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:35:13.610552   20561 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/default-k8s-diff-port-213432/config.json ...
	I1025 21:35:13.611119   20561 mustload.go:65] Loading cluster: default-k8s-diff-port-213432
	I1025 21:35:13.611319   20561 config.go:180] Loaded profile config "default-k8s-diff-port-213432": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:35:13.611363   20561 stop.go:39] StopHost: default-k8s-diff-port-213432
	I1025 21:35:13.632949   20561 out.go:177] * Stopping node "default-k8s-diff-port-213432"  ...
	I1025 21:35:13.675159   20561 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:13.736404   20561 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:13.736465   20561 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:35:13.736500   20561 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:13.736523   20561 retry.go:31] will retry after 1.104660288s: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:14.841391   20561 stop.go:39] StopHost: default-k8s-diff-port-213432
	I1025 21:35:14.863750   20561 out.go:177] * Stopping node "default-k8s-diff-port-213432"  ...
	I1025 21:35:14.885800   20561 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:14.948966   20561 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:14.949012   20561 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:35:14.949028   20561 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:14.949048   20561 retry.go:31] will retry after 2.160763633s: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:17.112016   20561 stop.go:39] StopHost: default-k8s-diff-port-213432
	I1025 21:35:17.134574   20561 out.go:177] * Stopping node "default-k8s-diff-port-213432"  ...
	I1025 21:35:17.156366   20561 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:17.220922   20561 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:17.220957   20561 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:35:17.220968   20561 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:17.220982   20561 retry.go:31] will retry after 2.62026012s: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:19.841943   20561 stop.go:39] StopHost: default-k8s-diff-port-213432
	I1025 21:35:19.863505   20561 out.go:177] * Stopping node "default-k8s-diff-port-213432"  ...
	I1025 21:35:19.885350   20561 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:19.948626   20561 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:19.948662   20561 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:35:19.948675   20561 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:19.948691   20561 retry.go:31] will retry after 3.164785382s: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:23.115306   20561 stop.go:39] StopHost: default-k8s-diff-port-213432
	I1025 21:35:23.137685   20561 out.go:177] * Stopping node "default-k8s-diff-port-213432"  ...
	I1025 21:35:23.180660   20561 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:23.244038   20561 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:23.244089   20561 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:35:23.244102   20561 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:23.244122   20561 retry.go:31] will retry after 4.680977329s: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:27.927276   20561 stop.go:39] StopHost: default-k8s-diff-port-213432
	I1025 21:35:27.965038   20561 out.go:177] * Stopping node "default-k8s-diff-port-213432"  ...
	I1025 21:35:28.009215   20561 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:28.076509   20561 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:28.076546   20561 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	W1025 21:35:28.076557   20561 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:28.097972   20561 out.go:177] 
	W1025 21:35:28.119312   20561 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:28.119342   20561 out.go:239] * 
	* 
	W1025 21:35:28.123390   20561 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:35:28.183147   20561 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-darwin-amd64 stop -p default-k8s-diff-port-213432 --alsologtostderr -v=3" : exit status 82
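Most of the 14.7s spent by this failed stop is retry cadence: the retry.go lines above show waits of roughly 1.1s, 2.2s, 2.6s, 3.2s, and 4.7s between `docker container inspect` attempts before minikube gives up with exit status 82 (GUEST_STOP_TIMEOUT). A self-contained sketch of that kind of jittered exponential backoff, assuming nothing about minikube's actual retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with jittered exponential backoff until it
	// succeeds or the total budget is spent, the pattern suggested by
	// the increasing "will retry after ..." intervals in the log above.
	func retryExpo(fn func() error, initial, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		delay := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			// Up to 50% jitter so concurrent retries do not synchronize.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			if time.Now().Add(sleep).After(deadline) {
				return fmt.Errorf("retry budget exhausted: %w", err)
			}
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		err := retryExpo(func() error {
			return errors.New("No such container: default-k8s-diff-port-213432")
		}, time.Second, 14*time.Second)
		fmt.Println(err)
	}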
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "0cd271c91eee03a9946ccb0618b4f9ff0bbbb7f45a931c6b753e2f1b34610592",
	        "Created": "2022-10-26T04:35:03.712111532Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
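Note what that post-mortem `docker inspect` actually matched: the container is gone, but the minikube-created bridge network of the same name survives, and without `--type` the docker CLI inspects whichever object kind matches the name, hence a JSON object with "Scope", "IPAM", and an empty "Containers" map rather than a container. A small sketch that disambiguates which kinds still exist (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectKinds probes each docker object kind under a given name;
	// `docker inspect --type <kind>` exits non-zero when that kind is absent.
	func inspectKinds(name string) {
		for _, kind := range []string{"container", "network", "volume"} {
			err := exec.Command("docker", "inspect", "--type", kind, name).Run()
			fmt.Printf("%-9s exists=%v\n", kind, err == nil)
		}
	}

	func main() {
		inspectKinds("default-k8s-diff-port-213432")
	}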
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (114.342696ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:28.427407   20628 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.89s)
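Two distinct failure exit codes recur in this group: stop exits 82, which the stderr above pairs with GUEST_STOP_TIMEOUT, while the failed starts (e.g. SecondStart below) exit 80. A hedged sketch of the mapping; only the code/reason pairs visible in this report are asserted, and treating the 80s as a guest-error band is an assumption:

	package main

	import "fmt"

	// Pairings taken from this report's own output; the constant names
	// are illustrative, and the "80s = guest errors" grouping is an
	// assumption rather than a documented guarantee.
	const (
		exGuestError       = 80 // exit status of the failed SecondStart runs
		exGuestStopTimeout = 82 // exit status of this failed Stop
	)

	func main() {
		fmt.Println(exGuestError, exGuestStopTimeout)
	}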

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (113.179634ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:27.292118   20601 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-213431 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "cf8b1cf49cc7476892dc583dd4bc1e15500a2ee62cc5a58e1f33311cd866facb",
	        "Created": "2022-10-26T04:35:02.573646163Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (111.655511ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:27.723244   20611 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.54s)
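Worth noting: the `addons enable dashboard` step above still succeeds even though the host is "Nonexistent". Addon toggles are persisted into the profile's config.json (the same file the Stop log shows being saved) rather than applied to a live container, so the command has state to mutate. A quick, hypothetical check against the profile directory used in this run:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Path taken from the log lines in this report; adjust for your
		// own MINIKUBE_HOME when reproducing.
		home := "/Users/jenkins/minikube-integration/14956-2080/.minikube"
		cfg := filepath.Join(home, "profiles", "embed-certs-213431", "config.json")
		data, err := os.ReadFile(cfg)
		if err != nil {
			fmt.Println("profile config missing:", err)
			return
		}
		// The addons map (e.g. "dashboard": true) lives in this JSON even
		// when no container for the profile exists.
		fmt.Printf("profile config present: %d bytes\n", len(data))
	}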

TestStartStop/group/embed-certs/serial/SecondStart (62.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-213431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-213431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (1m2.039722707s)

-- stdout --
	* [embed-certs-213431] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-213431 in cluster embed-certs-213431
	* Pulling base image ...
	* docker "embed-certs-213431" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-213431" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:35:27.772749   20615 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:35:27.772913   20615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:27.772918   20615 out.go:309] Setting ErrFile to fd 2...
	I1025 21:35:27.772922   20615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:27.773025   20615 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:35:27.773477   20615 out.go:303] Setting JSON to false
	I1025 21:35:27.789012   20615 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5696,"bootTime":1666753231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:35:27.789089   20615 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:35:27.811400   20615 out.go:177] * [embed-certs-213431] minikube v1.27.1 on Darwin 12.6
	I1025 21:35:27.855115   20615 notify.go:220] Checking for updates...
	I1025 21:35:27.877032   20615 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:35:27.899005   20615 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:35:27.921048   20615 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:35:27.965038   20615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:35:28.009240   20615 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:35:28.031210   20615 config.go:180] Loaded profile config "embed-certs-213431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:35:28.031558   20615 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:35:28.196825   20615 docker.go:137] docker version: linux-20.10.17
	I1025 21:35:28.197056   20615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:35:28.377910   20615 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:35:28.317237824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:35:28.399706   20615 out.go:177] * Using the docker driver based on existing profile
	I1025 21:35:28.420363   20615 start.go:282] selected driver: docker
	I1025 21:35:28.420376   20615 start.go:808] validating driver "docker" against &{Name:embed-certs-213431 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-213431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:35:28.420459   20615 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:35:28.422683   20615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:35:28.554260   20615 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:35:28.495453548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:35:28.554400   20615 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:35:28.554420   20615 cni.go:95] Creating CNI manager for ""
	I1025 21:35:28.554432   20615 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:35:28.554452   20615 start_flags.go:317] config:
	{Name:embed-certs-213431 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-213431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:35:28.596753   20615 out.go:177] * Starting control plane node embed-certs-213431 in cluster embed-certs-213431
	I1025 21:35:28.618063   20615 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:35:28.638838   20615 out.go:177] * Pulling base image ...
	I1025 21:35:28.681883   20615 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:35:28.681892   20615 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:35:28.681962   20615 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:35:28.681977   20615 cache.go:57] Caching tarball of preloaded images
	I1025 21:35:28.682172   20615 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:35:28.682198   20615 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:35:28.683168   20615 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/embed-certs-213431/config.json ...
	I1025 21:35:28.745208   20615 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:35:28.745246   20615 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:35:28.745285   20615 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:35:28.745383   20615 start.go:364] acquiring machines lock for embed-certs-213431: {Name:mke416081ef15e84157f62e109b2319d3307f98a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:28.745483   20615 start.go:368] acquired machines lock for "embed-certs-213431" in 61.736µs
	I1025 21:35:28.745502   20615 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:35:28.745511   20615 fix.go:55] fixHost starting: 
	I1025 21:35:28.745763   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:28.877068   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:28.877141   20615 fix.go:103] recreateIfNeeded on embed-certs-213431: state= err=unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:28.877188   20615 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:35:28.898831   20615 out.go:177] * docker "embed-certs-213431" container is missing, will recreate.
	I1025 21:35:28.919720   20615 delete.go:124] DEMOLISHING embed-certs-213431 ...
	I1025 21:35:28.919866   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:28.981671   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:28.981708   20615 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:28.981721   20615 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:28.982087   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:29.044439   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:29.044479   20615 delete.go:82] Unable to get host status for embed-certs-213431, assuming it has already been deleted: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:29.044563   20615 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-213431
	W1025 21:35:29.106078   20615 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-213431 returned with exit code 1
	I1025 21:35:29.106103   20615 kic.go:356] could not find the container embed-certs-213431 to remove it. will try anyways
	I1025 21:35:29.106177   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:29.168677   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:29.168747   20615 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:29.168825   20615 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0"
	W1025 21:35:29.346328   20615 cli_runner.go:211] docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:35:29.346368   20615 oci.go:646] error shutdown embed-certs-213431: docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:30.346664   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:30.410876   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:30.410924   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:30.410936   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:30.410967   20615 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:30.965298   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:31.029284   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:31.029336   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:31.029351   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:31.029371   20615 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:32.109895   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:32.171438   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:32.171483   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:32.171495   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:32.171514   20615 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:33.481902   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:33.545378   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:33.545415   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:33.545427   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:33.545446   20615 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:35.128402   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:35.194589   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:35.194635   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:35.194645   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:35.194667   20615 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:37.536580   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:37.603456   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:37.603496   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:37.603509   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:37.603527   20615 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:42.112200   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:42.179203   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:42.179248   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:42.179259   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:42.179301   20615 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:45.401680   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:45.466626   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:45.466698   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:45.466715   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:35:45.466743   20615 oci.go:88] couldn't shut down embed-certs-213431 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	 
	I1025 21:35:45.466819   20615 cli_runner.go:164] Run: docker rm -f -v embed-certs-213431
	I1025 21:35:45.531125   20615 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-213431
	W1025 21:35:45.592703   20615 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-213431 returned with exit code 1
	I1025 21:35:45.592861   20615 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:45.654226   20615 cli_runner.go:164] Run: docker network rm embed-certs-213431
	W1025 21:35:45.767580   20615 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:35:45.767598   20615 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:35:46.768081   20615 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:35:46.790401   20615 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:35:46.790573   20615 start.go:159] libmachine.API.Create for "embed-certs-213431" (driver="docker")
	I1025 21:35:46.790629   20615 client.go:168] LocalClient.Create starting
	I1025 21:35:46.790821   20615 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:35:46.790924   20615 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:46.790963   20615 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:46.791069   20615 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:35:46.791123   20615 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:46.791144   20615 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:46.812542   20615 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:35:46.875277   20615 cli_runner.go:211] docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:35:46.875360   20615 network_create.go:272] running [docker network inspect embed-certs-213431] to gather additional debugging logs...
	I1025 21:35:46.875374   20615 cli_runner.go:164] Run: docker network inspect embed-certs-213431
	W1025 21:35:46.935672   20615 cli_runner.go:211] docker network inspect embed-certs-213431 returned with exit code 1
	I1025 21:35:46.935701   20615 network_create.go:275] error running [docker network inspect embed-certs-213431]: docker network inspect embed-certs-213431: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-213431
	I1025 21:35:46.935716   20615 network_create.go:277] output of [docker network inspect embed-certs-213431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-213431
	
	** /stderr **
	I1025 21:35:46.935778   20615 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:46.997603   20615 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005b4230] misses:0}
	I1025 21:35:46.997641   20615 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:46.997653   20615 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:35:46.997722   20615 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	W1025 21:35:47.060146   20615 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431 returned with exit code 1
	W1025 21:35:47.060181   20615 network_create.go:107] failed to create docker network embed-certs-213431 192.168.49.0/24, will retry: subnet is taken
	I1025 21:35:47.060506   20615 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230] amended:false}} dirty:map[] misses:0}
	I1025 21:35:47.060524   20615 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:47.060717   20615 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230] amended:true}} dirty:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838] misses:0}
	I1025 21:35:47.060729   20615 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:47.060739   20615 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:35:47.060814   20615 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	W1025 21:35:47.121298   20615 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431 returned with exit code 1
	W1025 21:35:47.121331   20615 network_create.go:107] failed to create docker network embed-certs-213431 192.168.58.0/24, will retry: subnet is taken
	I1025 21:35:47.121596   20615 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230] amended:true}} dirty:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838] misses:1}
	I1025 21:35:47.121614   20615 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:47.121809   20615 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230] amended:true}} dirty:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838 192.168.67.0:0xc00048ea68] misses:1}
	I1025 21:35:47.121819   20615 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:47.121826   20615 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:35:47.121892   20615 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	I1025 21:35:47.213066   20615 network_create.go:99] docker network embed-certs-213431 192.168.67.0/24 created
	I1025 21:35:47.213096   20615 kic.go:106] calculated static IP "192.168.67.2" for the "embed-certs-213431" container
	I1025 21:35:47.213202   20615 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:35:47.275465   20615 cli_runner.go:164] Run: docker volume create embed-certs-213431 --label name.minikube.sigs.k8s.io=embed-certs-213431 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:35:47.336193   20615 oci.go:103] Successfully created a docker volume embed-certs-213431
	I1025 21:35:47.336318   20615 cli_runner.go:164] Run: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:35:47.477733   20615 cli_runner.go:211] docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:35:47.477790   20615 client.go:171] LocalClient.Create took 687.148256ms
	I1025 21:35:49.480189   20615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:49.480294   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:49.542905   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:49.542986   20615 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:49.694647   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:49.758303   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:49.758389   20615 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:50.059511   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:50.123082   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:50.123164   20615 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:50.695387   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:50.756388   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:50.756486   20615 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:50.756517   20615 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:50.756589   20615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:50.756629   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:50.817041   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:50.817121   20615 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:50.998009   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:51.061744   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:51.061824   20615 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:51.394429   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:51.458221   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:51.458300   20615 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:51.918634   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:51.982048   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:51.982152   20615 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:51.982187   20615 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:51.982195   20615 start.go:128] duration metric: createHost completed in 5.214080484s
	I1025 21:35:51.982256   20615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:51.982298   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:52.043199   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:52.043284   20615 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:52.239276   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:52.300622   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:52.300717   20615 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:52.598376   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:52.659828   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:52.659911   20615 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:53.325548   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:53.387913   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:53.387998   20615 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:53.388028   20615 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:53.388078   20615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:53.388119   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:53.449371   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:53.449444   20615 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:53.625684   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:53.691378   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:53.691475   20615 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:54.016712   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:54.083428   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:35:54.083508   20615 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:54.686368   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:35:54.748804   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:35:54.748898   20615 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:35:54.748916   20615 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:54.748938   20615 fix.go:57] fixHost completed within 26.003330048s
	I1025 21:35:54.748946   20615 start.go:83] releasing machines lock for "embed-certs-213431", held for 26.003371669s
	W1025 21:35:54.748959   20615 start.go:603] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W1025 21:35:54.749098   20615 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:35:54.749109   20615 start.go:618] Will try again in 5 seconds ...
	I1025 21:35:59.751336   20615 start.go:364] acquiring machines lock for embed-certs-213431: {Name:mke416081ef15e84157f62e109b2319d3307f98a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:59.751484   20615 start.go:368] acquired machines lock for "embed-certs-213431" in 116.609µs
	I1025 21:35:59.751515   20615 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:35:59.751523   20615 fix.go:55] fixHost starting: 
	I1025 21:35:59.751911   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:59.818694   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:59.818734   20615 fix.go:103] recreateIfNeeded on embed-certs-213431: state= err=unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:59.818748   20615 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:35:59.840757   20615 out.go:177] * docker "embed-certs-213431" container is missing, will recreate.
	I1025 21:35:59.885025   20615 delete.go:124] DEMOLISHING embed-certs-213431 ...
	I1025 21:35:59.885170   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:35:59.945656   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:59.945691   20615 stop.go:75] unable to get state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:59.945702   20615 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:35:59.946054   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:00.006244   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:00.006290   20615 delete.go:82] Unable to get host status for embed-certs-213431, assuming it has already been deleted: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:00.006355   20615 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-213431
	W1025 21:36:00.066858   20615 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-213431 returned with exit code 1
	I1025 21:36:00.066883   20615 kic.go:356] could not find the container embed-certs-213431 to remove it. will try anyways
	I1025 21:36:00.066938   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:00.126626   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	W1025 21:36:00.126663   20615 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:00.126748   20615 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0"
	W1025 21:36:00.186882   20615 cli_runner.go:211] docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:36:00.186916   20615 oci.go:646] error shutdown embed-certs-213431: docker exec --privileged -t embed-certs-213431 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:01.188516   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:01.250573   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:01.250616   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:01.250627   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:01.250644   20615 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:01.649603   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:01.714881   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:01.714923   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:01.714935   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:01.714955   20615 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:02.313091   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:02.377413   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:02.377464   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:02.377486   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:02.377510   20615 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:03.788275   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:03.852676   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:03.852728   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:03.852739   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:03.852758   20615 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:05.047455   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:05.109801   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:05.109847   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:05.109858   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:05.109878   20615 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:08.566646   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:08.634036   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:08.634082   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:08.634094   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:08.634113   20615 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:13.180329   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:13.309048   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:13.309099   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:13.309110   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:13.309130   20615 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:19.140914   20615 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:19.206208   20615 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:19.206247   20615 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:19.206257   20615 oci.go:660] temporary error: container embed-certs-213431 status is  but expect it to be exited
	I1025 21:36:19.206282   20615 oci.go:88] couldn't shut down embed-certs-213431 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	 
	I1025 21:36:19.206362   20615 cli_runner.go:164] Run: docker rm -f -v embed-certs-213431
	I1025 21:36:19.269531   20615 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-213431
	W1025 21:36:19.329188   20615 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-213431 returned with exit code 1
	I1025 21:36:19.329292   20615 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:36:19.389985   20615 cli_runner.go:164] Run: docker network rm embed-certs-213431
	W1025 21:36:19.499450   20615 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:36:19.499467   20615 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:36:20.499511   20615 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:36:20.541944   20615 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:36:20.542057   20615 start.go:159] libmachine.API.Create for "embed-certs-213431" (driver="docker")
	I1025 21:36:20.542076   20615 client.go:168] LocalClient.Create starting
	I1025 21:36:20.542159   20615 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:36:20.542198   20615 main.go:134] libmachine: Decoding PEM data...
	I1025 21:36:20.542212   20615 main.go:134] libmachine: Parsing certificate...
	I1025 21:36:20.542253   20615 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:36:20.542275   20615 main.go:134] libmachine: Decoding PEM data...
	I1025 21:36:20.542282   20615 main.go:134] libmachine: Parsing certificate...
	I1025 21:36:20.542621   20615 cli_runner.go:164] Run: docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:36:20.603971   20615 cli_runner.go:211] docker network inspect embed-certs-213431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:36:20.604047   20615 network_create.go:272] running [docker network inspect embed-certs-213431] to gather additional debugging logs...
	I1025 21:36:20.604069   20615 cli_runner.go:164] Run: docker network inspect embed-certs-213431
	W1025 21:36:20.666267   20615 cli_runner.go:211] docker network inspect embed-certs-213431 returned with exit code 1
	I1025 21:36:20.666290   20615 network_create.go:275] error running [docker network inspect embed-certs-213431]: docker network inspect embed-certs-213431: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-213431
	I1025 21:36:20.666304   20615 network_create.go:277] output of [docker network inspect embed-certs-213431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-213431
	
	** /stderr **
	I1025 21:36:20.666402   20615 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:36:20.727088   20615 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230] amended:true}} dirty:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838 192.168.67.0:0xc00048ea68] misses:1}
	I1025 21:36:20.727122   20615 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:20.727325   20615 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230] amended:true}} dirty:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838 192.168.67.0:0xc00048ea68] misses:2}
	I1025 21:36:20.727335   20615 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:20.727537   20615 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838 192.168.67.0:0xc00048ea68] amended:false}} dirty:map[] misses:0}
	I1025 21:36:20.727545   20615 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:20.727735   20615 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838 192.168.67.0:0xc00048ea68] amended:true}} dirty:map[192.168.49.0:0xc0005b4230 192.168.58.0:0xc00040a838 192.168.67.0:0xc00048ea68 192.168.76.0:0xc00040a0a0] misses:0}
	I1025 21:36:20.727750   20615 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:20.727757   20615 network_create.go:115] attempt to create docker network embed-certs-213431 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:36:20.727821   20615 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213431 embed-certs-213431
	I1025 21:36:20.817245   20615 network_create.go:99] docker network embed-certs-213431 192.168.76.0/24 created
	I1025 21:36:20.817270   20615 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-213431" container
	I1025 21:36:20.817377   20615 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:36:20.880402   20615 cli_runner.go:164] Run: docker volume create embed-certs-213431 --label name.minikube.sigs.k8s.io=embed-certs-213431 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:36:20.940935   20615 oci.go:103] Successfully created a docker volume embed-certs-213431
	I1025 21:36:20.941059   20615 cli_runner.go:164] Run: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:36:21.075834   20615 cli_runner.go:211] docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:36:21.075881   20615 client.go:171] LocalClient.Create took 533.799016ms
	I1025 21:36:23.078413   20615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:36:23.078528   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:23.141075   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:23.141172   20615 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:23.307469   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:23.371937   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:23.372025   20615 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:23.787598   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:23.854417   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:23.854506   20615 retry.go:31] will retry after 829.823411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:24.686194   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:24.750136   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:36:24.750228   20615 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:36:24.750249   20615 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:24.750311   20615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:36:24.750368   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:24.810382   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:24.810482   20615 retry.go:31] will retry after 273.70215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:25.086509   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:25.151061   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:25.151142   20615 retry.go:31] will retry after 209.670244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:25.363171   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:25.426725   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:25.426807   20615 retry.go:31] will retry after 670.513831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:26.097736   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:26.161034   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:36:26.161121   20615 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:36:26.161145   20615 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:26.161171   20615 start.go:128] duration metric: createHost completed in 5.661620676s
	I1025 21:36:26.161236   20615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:36:26.161275   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:26.221833   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:26.221917   20615 retry.go:31] will retry after 168.316559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:26.390621   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:26.454205   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:26.454286   20615 retry.go:31] will retry after 390.412446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:26.847167   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:26.913411   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:26.913526   20615 retry.go:31] will retry after 587.33751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:27.501800   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:27.568481   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:36:27.568574   20615 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:36:27.568595   20615 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:27.568658   20615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:36:27.568728   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:27.629265   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:27.629405   20615 retry.go:31] will retry after 230.78805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:27.860673   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:27.924144   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:27.924237   20615 retry.go:31] will retry after 386.469643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:28.313093   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:28.378604   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:28.378683   20615 retry.go:31] will retry after 423.866531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:28.804692   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:28.865446   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	I1025 21:36:28.865527   20615 retry.go:31] will retry after 659.880839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:29.527752   20615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431
	W1025 21:36:29.590371   20615 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431 returned with exit code 1
	W1025 21:36:29.590459   20615 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:36:29.590489   20615 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-213431": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213431: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	I1025 21:36:29.590501   20615 fix.go:57] fixHost completed within 29.838883079s
	I1025 21:36:29.590509   20615 start.go:83] releasing machines lock for "embed-certs-213431", held for 29.838918412s
	W1025 21:36:29.590683   20615 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-213431" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-213431" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:36:29.634240   20615 out.go:177] 
	W1025 21:36:29.656345   20615 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-213431 container: docker run --rm --name embed-certs-213431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213431 --entrypoint /usr/bin/test -v embed-certs-213431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:36:29.656376   20615 out.go:239] * 
	* 
	W1025 21:36:29.657564   20615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:36:29.740902   20615 out.go:177] 

** /stderr **
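Note on the root cause: the stderr above bottoms out at the actual failure. Docker Desktop's containerd socket refused connections, so the preload-sidecar "docker run" exited with status 125, the container was never created, and every subsequent "docker container inspect" could only report "No such container". As a hypothetical diagnostic sketch (not part of minikube or this test suite), a few lines of Go can distinguish "daemon backend down" from "container missing" by dialing the same socket; the socket path is copied verbatim from the log and is specific to Docker Desktop on macOS.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the stderr above (Docker Desktop on macOS);
	// it will differ on other Docker installs.
	const sock = "/var/run/desktop-containerd/containerd.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the failure in the log.
		fmt.Printf("containerd socket unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}

If the dial is refused while "docker version" still answers, the Docker Desktop VM's containerd backend has died underneath the CLI, which appears to be the state this run hit.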
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p embed-certs-213431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "14260a1c085002450b4b15c4d962abf1c775e55798082f27876fc3c73b239585",
	        "Created": "2022-10-26T04:36:20.793979296Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (113.715334ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:36:29.958107   21071 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (62.23s)
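One pattern worth calling out in the failed start above: each "retry.go:31] will retry after ..." line is minikube re-running the same "docker container inspect" query with a growing delay before giving up. The following is a compact stand-in for that loop, a sketch only and not minikube's actual retry code; the Go template string and container name are copied from the log, while the attempt count and delays are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// inspectSSHPort runs the docker CLI query seen in the log: read the host
// port that is mapped to the container's port 22.
func inspectSSHPort(name string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	return string(out), err
}

func main() {
	delay := 150 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		port, err := inspectSSHPort("embed-certs-213431")
		if err == nil {
			fmt.Printf("ssh port: %s\n", port)
			return
		}
		fmt.Printf("attempt %d failed (%v); retrying after %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts, as the retry lines above do
	}
	fmt.Println("giving up: container never became inspectable")
}

Backing off like this is reasonable while a container may still be starting, but in this run it could never succeed: the container was never created, so every attempt failed identically until the caller gave up.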

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (114.47807ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:28.542044   20638 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-213432 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "0cd271c91eee03a9946ccb0618b4f9ff0bbbb7f45a931c6b753e2f1b34610592",
	        "Created": "2022-10-26T04:35:03.712111532Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (115.166291ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:35:29.123864   20662 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.70s)
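The assertion at start_stop_delete_test.go:241 above makes this failure precise: after the stop phase the harness expected a "Stopped" host, but the container had vanished entirely, so status collapsed to "Nonexistent". A simplified, hypothetical version of that state mapping (not minikube's actual status.go) shows why a missing container can never report "Stopped": the docker inspect call exits non-zero before there is any state to translate.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus translates docker's view of a container into the coarse host
// states the harness checks for. A simplified stand-in, not minikube's code.
func hostStatus(name string) string {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		// "Error: No such container: ..." lands here: the container is gone,
		// not stopped, so the only honest answer is Nonexistent.
		return "Nonexistent"
	}
	switch s := strings.TrimSpace(string(out)); s {
	case "running":
		return "Running"
	case "exited":
		return "Stopped"
	default:
		return s // paused, restarting, etc., passed through as-is
	}
}

func main() {
	fmt.Println(hostStatus("default-k8s-diff-port-213432"))
}

This is also why the harness prints "status error: exit status 7 (may be ok)": a stopped or missing host makes the status command exit non-zero too, so a non-zero exit alone cannot distinguish the two cases.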

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-213432 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-213432 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (1m1.761579921s)

-- stdout --
	* [default-k8s-diff-port-213432] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-diff-port-213432 in cluster default-k8s-diff-port-213432
	* Pulling base image ...
	* docker "default-k8s-diff-port-213432" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-diff-port-213432" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:35:29.176473   20669 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:35:29.176636   20669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:29.176641   20669 out.go:309] Setting ErrFile to fd 2...
	I1025 21:35:29.176645   20669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:29.177237   20669 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:35:29.178061   20669 out.go:303] Setting JSON to false
	I1025 21:35:29.193622   20669 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5698,"bootTime":1666753231,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:35:29.193722   20669 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:35:29.215235   20669 out.go:177] * [default-k8s-diff-port-213432] minikube v1.27.1 on Darwin 12.6
	I1025 21:35:29.279811   20669 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:35:29.258149   20669 notify.go:220] Checking for updates...
	I1025 21:35:29.323145   20669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:35:29.365108   20669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:35:29.386995   20669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:35:29.408408   20669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:35:29.430771   20669 config.go:180] Loaded profile config "default-k8s-diff-port-213432": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:35:29.431384   20669 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:35:29.497719   20669 docker.go:137] docker version: linux-20.10.17
	I1025 21:35:29.497895   20669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:35:29.625353   20669 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:35:29.569786013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:35:29.668749   20669 out.go:177] * Using the docker driver based on existing profile
	I1025 21:35:29.690143   20669 start.go:282] selected driver: docker
	I1025 21:35:29.690171   20669 start.go:808] validating driver "docker" against &{Name:default-k8s-diff-port-213432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-213432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:35:29.690307   20669 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:35:29.693670   20669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:35:29.821387   20669 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:35:29.766255498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:35:29.821570   20669 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:35:29.821592   20669 cni.go:95] Creating CNI manager for ""
	I1025 21:35:29.821601   20669 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:35:29.821612   20669 start_flags.go:317] config:
	{Name:default-k8s-diff-port-213432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-213432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:35:29.865093   20669 out.go:177] * Starting control plane node default-k8s-diff-port-213432 in cluster default-k8s-diff-port-213432
	I1025 21:35:29.886436   20669 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:35:29.908400   20669 out.go:177] * Pulling base image ...
	I1025 21:35:29.930393   20669 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:35:29.930408   20669 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:35:29.930542   20669 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:35:29.930569   20669 cache.go:57] Caching tarball of preloaded images
	I1025 21:35:29.931254   20669 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:35:29.931404   20669 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:35:29.931872   20669 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/default-k8s-diff-port-213432/config.json ...
	I1025 21:35:29.995223   20669 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:35:29.995241   20669 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:35:29.995250   20669 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:35:29.995296   20669 start.go:364] acquiring machines lock for default-k8s-diff-port-213432: {Name:mkfae46218f26a8df96ce623e68a2e2d4ae3bab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:29.995383   20669 start.go:368] acquired machines lock for "default-k8s-diff-port-213432" in 62.841µs
	I1025 21:35:29.995401   20669 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:35:29.995410   20669 fix.go:55] fixHost starting: 
	I1025 21:35:29.995632   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:30.056334   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:30.056387   20669 fix.go:103] recreateIfNeeded on default-k8s-diff-port-213432: state= err=unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:30.056413   20669 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:35:30.078543   20669 out.go:177] * docker "default-k8s-diff-port-213432" container is missing, will recreate.
	I1025 21:35:30.100340   20669 delete.go:124] DEMOLISHING default-k8s-diff-port-213432 ...
	I1025 21:35:30.100512   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:30.161599   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:30.161661   20669 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:30.161680   20669 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:30.162017   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:30.222930   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:30.222997   20669 delete.go:82] Unable to get host status for default-k8s-diff-port-213432, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:30.223071   20669 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-213432
	W1025 21:35:30.283731   20669 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:30.283759   20669 kic.go:356] could not find the container default-k8s-diff-port-213432 to remove it. will try anyways
	I1025 21:35:30.283846   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:30.351828   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:35:30.351870   20669 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:30.351970   20669 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0"
	W1025 21:35:30.413175   20669 cli_runner.go:211] docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:35:30.413197   20669 oci.go:646] error shutdown default-k8s-diff-port-213432: docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:31.415662   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:31.482188   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:31.482233   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:31.482245   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:31.482285   20669 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:32.035265   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:32.099881   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:32.099920   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:32.099931   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:32.099960   20669 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:33.182718   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:33.247078   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:33.247131   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:33.247145   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:33.247168   20669 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:34.559613   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:34.621751   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:34.621792   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:34.621817   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:34.621836   20669 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:36.204603   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:36.268265   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:36.268304   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:36.268316   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:36.268338   20669 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:38.611157   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:38.675571   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:38.675647   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:38.675664   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:38.675686   20669 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:43.184321   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:43.247423   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:43.247473   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:43.247483   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:43.247504   20669 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:46.471424   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:35:46.537692   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:35:46.537729   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:46.537739   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:35:46.537766   20669 oci.go:88] couldn't shut down default-k8s-diff-port-213432 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	 
	I1025 21:35:46.537835   20669 cli_runner.go:164] Run: docker rm -f -v default-k8s-diff-port-213432
	I1025 21:35:46.600660   20669 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-213432
	W1025 21:35:46.660433   20669 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:46.660552   20669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:46.721202   20669 cli_runner.go:164] Run: docker network rm default-k8s-diff-port-213432
	W1025 21:35:46.867893   20669 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:35:46.867911   20669 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:35:47.870021   20669 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:35:47.913194   20669 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:35:47.913391   20669 start.go:159] libmachine.API.Create for "default-k8s-diff-port-213432" (driver="docker")
	I1025 21:35:47.913425   20669 client.go:168] LocalClient.Create starting
	I1025 21:35:47.913568   20669 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:35:47.913639   20669 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:47.913670   20669 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:47.913784   20669 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:35:47.913828   20669 main.go:134] libmachine: Decoding PEM data...
	I1025 21:35:47.913847   20669 main.go:134] libmachine: Parsing certificate...
	I1025 21:35:47.914539   20669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:35:47.979477   20669 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:35:47.979561   20669 network_create.go:272] running [docker network inspect default-k8s-diff-port-213432] to gather additional debugging logs...
	I1025 21:35:47.979584   20669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432
	W1025 21:35:48.041656   20669 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:48.041679   20669 network_create.go:275] error running [docker network inspect default-k8s-diff-port-213432]: docker network inspect default-k8s-diff-port-213432: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-213432
	I1025 21:35:48.041701   20669 network_create.go:277] output of [docker network inspect default-k8s-diff-port-213432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-213432
	
	** /stderr **
	I1025 21:35:48.041786   20669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:35:48.102010   20669 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00037ca28] misses:0}
	I1025 21:35:48.102049   20669 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.102062   20669 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:35:48.102159   20669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	W1025 21:35:48.162552   20669 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:48.162599   20669 network_create.go:107] failed to create docker network default-k8s-diff-port-213432 192.168.49.0/24, will retry: subnet is taken
	I1025 21:35:48.162867   20669 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:false}} dirty:map[] misses:0}
	I1025 21:35:48.162885   20669 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.163094   20669 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800] misses:0}
	I1025 21:35:48.163106   20669 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.163117   20669 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:35:48.163178   20669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	W1025 21:35:48.223835   20669 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:48.223866   20669 network_create.go:107] failed to create docker network default-k8s-diff-port-213432 192.168.58.0/24, will retry: subnet is taken
	I1025 21:35:48.224171   20669 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800] misses:1}
	I1025 21:35:48.224189   20669 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.224401   20669 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838] misses:1}
	I1025 21:35:48.224415   20669 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.224422   20669 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:35:48.224491   20669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	W1025 21:35:48.285561   20669 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:48.285617   20669 network_create.go:107] failed to create docker network default-k8s-diff-port-213432 192.168.67.0/24, will retry: subnet is taken
	I1025 21:35:48.285869   20669 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838] misses:2}
	I1025 21:35:48.285903   20669 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.286096   20669 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0] misses:2}
	I1025 21:35:48.286108   20669 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:35:48.286115   20669 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:35:48.286176   20669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	I1025 21:35:48.375437   20669 network_create.go:99] docker network default-k8s-diff-port-213432 192.168.76.0/24 created
	I1025 21:35:48.375472   20669 kic.go:106] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-213432" container
	I1025 21:35:48.375578   20669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:35:48.436778   20669 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-213432 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:35:48.497771   20669 oci.go:103] Successfully created a docker volume default-k8s-diff-port-213432
	I1025 21:35:48.497892   20669 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:35:48.637339   20669 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:35:48.637390   20669 client.go:171] LocalClient.Create took 723.951973ms
	I1025 21:35:50.638712   20669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:50.638815   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:50.705585   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:50.705682   20669 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:50.857261   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:50.920640   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:50.920747   20669 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:51.222209   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:51.288380   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:51.288466   20669 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:51.860910   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:51.927832   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:51.927928   20669 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:51.927948   20669 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:51.928000   20669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:51.928045   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:51.989127   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:51.989212   20669 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:52.168146   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:52.230243   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:52.230346   20669 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:52.562960   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:52.627792   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:52.627886   20669 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:53.089007   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:53.152223   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:53.152327   20669 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:53.152353   20669 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:53.152361   20669 start.go:128] duration metric: createHost completed in 5.282281044s
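All of the retry.go lines above follow one pattern: re-run the probe after a short, randomized, roughly growing delay, then surface the last error as a start.go warning once the budget is spent. A compact in-process sketch of that loop (the real pkg/util/retry is backoff-based; this stand-in only imitates the visible behavior):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryAfter mimics the "will retry after <delay>" lines: run op, sleep
    // a jittered delay that grows with the attempt number, give up at the
    // deadline and return the final error.
    func retryAfter(budget time.Duration, op func() error) error {
    	deadline := time.Now().Add(budget)
    	for attempt := int64(1); ; attempt++ {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("giving up: %w", err)
    		}
    		delay := time.Duration(rand.Int63n(attempt * int64(300*time.Millisecond)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    }

    func main() {
    	calls := 0
    	_ = retryAfter(2*time.Second, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("No such container: default-k8s-diff-port-213432")
    		}
    		return nil
    	})
    }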
	I1025 21:35:53.152454   20669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:35:53.152519   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:53.212763   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:53.212855   20669 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:53.409108   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:53.470219   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:53.470321   20669 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:53.769525   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:53.831416   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:53.831501   20669 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:54.497185   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:54.560653   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:54.560745   20669 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:54.560763   20669 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:54.560820   20669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:35:54.560870   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:54.620850   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:54.620926   20669 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:54.799059   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:54.860294   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:54.860379   20669 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:55.184791   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:55.247007   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:35:55.247087   20669 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:55.851755   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:35:55.917505   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:35:55.917598   20669 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:35:55.917625   20669 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:35:55.917634   20669 fix.go:57] fixHost completed within 25.922142729s
	I1025 21:35:55.917641   20669 start.go:83] releasing machines lock for "default-k8s-diff-port-213432", held for 25.922168328s
	W1025 21:35:55.917654   20669 start.go:603] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W1025 21:35:55.917795   20669 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:35:55.917807   20669 start.go:618] Will try again in 5 seconds ...
	I1025 21:36:00.920125   20669 start.go:364] acquiring machines lock for default-k8s-diff-port-213432: {Name:mkfae46218f26a8df96ce623e68a2e2d4ae3bab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:36:00.920263   20669 start.go:368] acquired machines lock for "default-k8s-diff-port-213432" in 107.106µs
	I1025 21:36:00.920294   20669 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:36:00.920302   20669 fix.go:55] fixHost starting: 
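The lock bookkeeping just above ("acquiring machines lock ... Delay:500ms Timeout:10m0s", then "acquired ... in 107.106µs") is a poll-until-acquired mutex keyed by machine name. A sketch of just the Delay/Timeout semantics (the real implementation is a cross-process lock; that detail is an assumption elided here):

    package main

    import (
    	"fmt"
    	"time"
    )

    // acquire polls try() every delay until it succeeds or timeout elapses,
    // matching the Delay and Timeout fields printed in the log.
    func acquire(try func() bool, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for !try() {
    		if time.Now().After(deadline) {
    			return fmt.Errorf("lock not acquired within %v", timeout)
    		}
    		time.Sleep(delay)
    	}
    	return nil
    }

    func main() {
    	start := time.Now()
    	free := true
    	_ = acquire(func() bool { held := free; free = false; return held },
    		500*time.Millisecond, 10*time.Minute)
    	fmt.Printf("acquired machines lock in %v\n", time.Since(start))
    }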
	I1025 21:36:00.920658   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:00.984976   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:00.985019   20669 fix.go:103] recreateIfNeeded on default-k8s-diff-port-213432: state= err=unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:00.985042   20669 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:36:01.028234   20669 out.go:177] * docker "default-k8s-diff-port-213432" container is missing, will recreate.
	I1025 21:36:01.049469   20669 delete.go:124] DEMOLISHING default-k8s-diff-port-213432 ...
	I1025 21:36:01.049724   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:01.111658   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:36:01.111695   20669 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:01.111714   20669 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:01.112057   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:01.172295   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:01.172343   20669 delete.go:82] Unable to get host status for default-k8s-diff-port-213432, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:01.172424   20669 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-213432
	W1025 21:36:01.233354   20669 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:01.233380   20669 kic.go:356] could not find the container default-k8s-diff-port-213432 to remove it. will try anyways
	I1025 21:36:01.233469   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:01.294515   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	W1025 21:36:01.294551   20669 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:01.294634   20669 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0"
	W1025 21:36:01.355327   20669 cli_runner.go:211] docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:36:01.355351   20669 oci.go:646] error shutdown default-k8s-diff-port-213432: docker exec --privileged -t default-k8s-diff-port-213432 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
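The demolish path here is deliberately tolerant: it attempts a graceful "sudo init 0" inside the container, then, in the lines that follow, polls .State.Status until it reads "exited", treating every probe failure as retriable before falling back to docker rm -f. A sketch of that verification loop as issued through the docker CLI:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitExited polls the same inspect command seen throughout this log
    // and returns once the container reports the "exited" state.
    func waitExited(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		out, err := exec.Command("docker", "container", "inspect",
    			name, "--format={{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("%s never reached the exited state", name)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	// With the container missing, every inspect fails and this times
    	// out, mirroring the "couldn't verify container is exited" outcome.
    	fmt.Println(waitExited("default-k8s-diff-port-213432", 20*time.Second))
    }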
	I1025 21:36:02.355935   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:02.419243   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:02.419308   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:02.419319   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:02.419339   20669 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:02.818300   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:02.881735   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:02.881784   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:02.881797   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:02.881831   20669 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:03.482025   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:03.547031   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:03.547091   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:03.547103   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:03.547123   20669 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:04.956757   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:05.021387   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:05.021426   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:05.021438   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:05.021470   20669 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:06.214157   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:06.280497   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:06.280541   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:06.280554   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:06.280574   20669 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:09.736950   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:09.802644   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:09.802695   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:09.802718   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:09.802739   20669 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:14.346875   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:14.412428   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:14.412480   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:14.412497   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:14.412519   20669 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:20.245396   20669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:20.311200   20669 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:20.311239   20669 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:20.311250   20669 oci.go:660] temporary error: container default-k8s-diff-port-213432 status is  but expect it to be exited
	I1025 21:36:20.311275   20669 oci.go:88] couldn't shut down default-k8s-diff-port-213432 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	 
	I1025 21:36:20.311335   20669 cli_runner.go:164] Run: docker rm -f -v default-k8s-diff-port-213432
	I1025 21:36:20.374643   20669 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-213432
	W1025 21:36:20.436412   20669 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:20.436521   20669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:36:20.498362   20669 cli_runner.go:164] Run: docker network rm default-k8s-diff-port-213432
	W1025 21:36:20.618755   20669 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:36:20.618774   20669 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:36:21.619005   20669 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:36:21.644155   20669 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:36:21.644405   20669 start.go:159] libmachine.API.Create for "default-k8s-diff-port-213432" (driver="docker")
	I1025 21:36:21.644430   20669 client.go:168] LocalClient.Create starting
	I1025 21:36:21.644608   20669 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:36:21.644677   20669 main.go:134] libmachine: Decoding PEM data...
	I1025 21:36:21.644705   20669 main.go:134] libmachine: Parsing certificate...
	I1025 21:36:21.644787   20669 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:36:21.644832   20669 main.go:134] libmachine: Decoding PEM data...
	I1025 21:36:21.644846   20669 main.go:134] libmachine: Parsing certificate...
	I1025 21:36:21.645561   20669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:36:21.709864   20669 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:36:21.709959   20669 network_create.go:272] running [docker network inspect default-k8s-diff-port-213432] to gather additional debugging logs...
	I1025 21:36:21.709977   20669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213432
	W1025 21:36:21.772460   20669 cli_runner.go:211] docker network inspect default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:21.772481   20669 network_create.go:275] error running [docker network inspect default-k8s-diff-port-213432]: docker network inspect default-k8s-diff-port-213432: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-213432
	I1025 21:36:21.772502   20669 network_create.go:277] output of [docker network inspect default-k8s-diff-port-213432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-213432
	
	** /stderr **
	I1025 21:36:21.772571   20669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:36:21.833499   20669 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0] misses:2}
	I1025 21:36:21.833535   20669 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:21.833738   20669 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0] misses:3}
	I1025 21:36:21.833748   20669 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:21.833964   20669 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0] amended:false}} dirty:map[] misses:0}
	I1025 21:36:21.833972   20669 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:21.834160   20669 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0] amended:false}} dirty:map[] misses:0}
	I1025 21:36:21.834170   20669 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:21.834383   20669 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0] amended:true}} dirty:map[192.168.49.0:0xc00037ca28 192.168.58.0:0xc00074c800 192.168.67.0:0xc00074c838 192.168.76.0:0xc0004a01c0 192.168.85.0:0xc00037c638] misses:0}
	I1025 21:36:21.834403   20669 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
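The subnet hunt above is easy to miss in the noise: candidates start at 192.168.49.0/24 and the third octet advances in steps of 9 (49, 58, 67, 76), each skipped because an earlier profile in this run still holds an unexpired reservation, until 192.168.85.0/24 comes up free. A sketch of that scan (the fixed step of 9 is read off the log and is an assumption, not documented behavior):

    package main

    import "fmt"

    // nextFreeSubnet walks the candidate /24s in the order the log shows
    // and returns the first one with no live reservation.
    func nextFreeSubnet(reserved map[string]bool) (string, bool) {
    	for octet := 49; octet <= 254; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !reserved[cidr] {
    			return cidr, true
    		}
    	}
    	return "", false
    }

    func main() {
    	reserved := map[string]bool{ // the four unexpired reservations above
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    	}
    	cidr, _ := nextFreeSubnet(reserved)
    	fmt.Println(cidr) // 192.168.85.0/24, as reserved in the log
    }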
	I1025 21:36:21.834410   20669 network_create.go:115] attempt to create docker network default-k8s-diff-port-213432 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 21:36:21.834483   20669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 default-k8s-diff-port-213432
	I1025 21:36:21.927124   20669 network_create.go:99] docker network default-k8s-diff-port-213432 192.168.85.0/24 created
	I1025 21:36:21.927166   20669 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-213432" container
	I1025 21:36:21.927280   20669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:36:21.989330   20669 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-213432 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:36:22.049579   20669 oci.go:103] Successfully created a docker volume default-k8s-diff-port-213432
	I1025 21:36:22.049706   20669 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:36:22.186166   20669 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:36:22.186223   20669 client.go:171] LocalClient.Create took 541.785126ms
	I1025 21:36:24.188623   20669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:36:24.199426   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:24.263556   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:24.263674   20669 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:24.430594   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:24.496404   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:24.496491   20669 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:24.914062   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:24.980714   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:24.980808   20669 retry.go:31] will retry after 829.823411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:25.813009   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:25.878541   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:36:25.878639   20669 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:36:25.878661   20669 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:25.878715   20669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:36:25.878761   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:25.940462   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:25.940544   20669 retry.go:31] will retry after 273.70215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:26.214702   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:26.274612   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:26.274721   20669 retry.go:31] will retry after 209.670244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:26.485237   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:26.547426   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:26.547516   20669 retry.go:31] will retry after 670.513831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:27.220423   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:27.287666   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:36:27.287775   20669 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:36:27.287795   20669 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:27.287819   20669 start.go:128] duration metric: createHost completed in 5.668681021s
	I1025 21:36:27.287884   20669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:36:27.287932   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:27.348228   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:27.348313   20669 retry.go:31] will retry after 168.316559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:27.517611   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:27.582678   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:27.582767   20669 retry.go:31] will retry after 390.412446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:27.975495   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:28.038302   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:28.038390   20669 retry.go:31] will retry after 587.33751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:28.626923   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:28.692440   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:36:28.692534   20669 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:36:28.692571   20669 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:28.692623   20669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:36:28.692669   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:28.752600   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:28.752687   20669 retry.go:31] will retry after 230.78805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:28.984306   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:29.046762   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:29.046862   20669 retry.go:31] will retry after 386.469643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:29.435729   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:29.502578   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:29.502666   20669 retry.go:31] will retry after 423.866531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:29.928688   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:29.992575   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	I1025 21:36:29.992667   20669 retry.go:31] will retry after 659.880839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:30.652671   20669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432
	W1025 21:36:30.714993   20669 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432 returned with exit code 1
	W1025 21:36:30.715086   20669 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:36:30.715105   20669 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-213432": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213432: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	I1025 21:36:30.715112   20669 fix.go:57] fixHost completed within 29.794715187s
	I1025 21:36:30.715119   20669 start.go:83] releasing machines lock for "default-k8s-diff-port-213432", held for 29.794748954s
	W1025 21:36:30.715283   20669 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-213432" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-213432" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:36:30.757758   20669 out.go:177] 
	W1025 21:36:30.779051   20669 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-213432 container: docker run --rm --name default-k8s-diff-port-213432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213432 --entrypoint /usr/bin/test -v default-k8s-diff-port-213432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:36:30.779092   20669 out.go:239] * 
	* 
	W1025 21:36:30.780573   20669 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:36:30.847595   20669 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-213432 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "bc10a1a69904f5d1e3ca2518f136955116cca3892d02c5c76bbda980f513bf71",
	        "Created": "2022-10-26T04:36:21.902549041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (134.268346ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:31.113128   21110 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.99s)
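Root cause for the block above: Docker Desktop's containerd socket refused connections, so the "docker run" that prepares the node volume exits 125, the node container is never created, and every later "docker container inspect" for port 22 exits 1 with "No such container". A minimal sketch of that host-port lookup, assuming only a local docker CLI on PATH and using the same inspect template seen in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort replicates the lookup retried in the log: ask docker for the
	// host port mapped to a container port via an inspect template.
	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			// A missing container ("Error: No such container: ...")
			// surfaces here as exit status 1, exactly as in the log above.
			return "", fmt.Errorf("get port %s for %q: %w", containerPort, container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// "default-k8s-diff-port-213432" is the profile from the log; any
		// running container name works.
		port, err := hostPort("default-k8s-diff-port-213432", "22/tcp")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}

Retrying the lookup, as the retry.go lines above do, only helps while a container is still starting; once the daemon itself is unreachable, every attempt fails identically.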

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-213431" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "14260a1c085002450b4b15c4d962abf1c775e55798082f27876fc3c73b239585",
	        "Created": "2022-10-26T04:36:20.793979296Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (111.160931ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:30.134276   21079 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.18s)
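The error 'context "embed-certs-213431" does not exist' is produced while building a client config from the kubeconfig: the failed SecondStart never wrote a context for this profile. A minimal sketch of that lookup, assuming k8s.io/client-go, which fails with exactly this message when the named context is absent:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig chain, but pin the current context to
		// the failed profile's name from the log.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-213431"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			// With no such context in the kubeconfig this reports:
			//   context "embed-certs-213431" does not exist
			fmt.Println(err)
		}
	}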

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-213431" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-213431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-213431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (32.586039ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-213431" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-213431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "14260a1c085002450b4b15c4d962abf1c775e55798082f27876fc3c73b239585",
	        "Created": "2022-10-26T04:36:20.793979296Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (112.325645ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:30.344319   21086 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.21s)
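The assertion here shells out to kubectl and then looks for the expected addon image in the describe output. A minimal sketch of the same two-step check, assuming kubectl on PATH; with the context missing it stops at the first step, as above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Describe the dashboard-metrics-scraper deployment in the failed
		// profile's context, then assert the expected addon image.
		out, err := exec.Command("kubectl", "--context", "embed-certs-213431",
			"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			// kubectl exits 1: error: context "embed-certs-213431" does not exist
			fmt.Println(err, string(out))
			return
		}
		if !strings.Contains(string(out), "k8s.gcr.io/echoserver:1.4") {
			fmt.Println("addon did not load correct image")
		}
	}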

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-213431 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p embed-certs-213431 "sudo crictl images -o json": exit status 80 (202.402923ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p embed-certs-213431 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.3",
- 	"registry.k8s.io/kube-controller-manager:v1.25.3",
- 	"registry.k8s.io/kube-proxy:v1.25.3",
- 	"registry.k8s.io/kube-scheduler:v1.25.3",
- 	"registry.k8s.io/pause:3.8",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "14260a1c085002450b4b15c4d962abf1c775e55798082f27876fc3c73b239585",
	        "Created": "2022-10-26T04:36:20.793979296Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (269.793177ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:30.849197   21098 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)
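Two layers fail in the image check above: the ssh command produced no output, so JSON decoding reports "unexpected end of JSON input", and the wanted v1.25.3 images then diff against an empty set. A sketch of the decode-and-diff, assuming the JSON shape of "crictl images -o json" (an "images" array whose entries carry "repoTags"):

	package main

	import (
		"encoding/json"
		"fmt"
		"sort"
	)

	// imageList mirrors the shape of "crictl images -o json" output.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func missingImages(raw []byte, want []string) ([]string, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			// Empty input, as in the failed ssh above, yields
			// "unexpected end of JSON input".
			return nil, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var miss []string
		for _, w := range want {
			if !have[w] {
				miss = append(miss, w)
			}
		}
		sort.Strings(miss)
		return miss, nil
	}

	func main() {
		want := []string{"registry.k8s.io/pause:3.8", "registry.k8s.io/etcd:3.5.4-0"}
		if _, err := missingImages([]byte(""), want); err != nil {
			fmt.Println("decode failed:", err) // unexpected end of JSON input
		}
	}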

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-213431 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p embed-certs-213431 --alsologtostderr -v=1: exit status 80 (208.561288ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:36:30.939946   21104 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:36:30.940154   21104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:36:30.940159   21104 out.go:309] Setting ErrFile to fd 2...
	I1025 21:36:30.940162   21104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:36:30.940283   21104 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:36:30.940586   21104 out.go:303] Setting JSON to false
	I1025 21:36:30.940602   21104 mustload.go:65] Loading cluster: embed-certs-213431
	I1025 21:36:30.940876   21104 config.go:180] Loaded profile config "embed-certs-213431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:36:30.941208   21104 cli_runner.go:164] Run: docker container inspect embed-certs-213431 --format={{.State.Status}}
	W1025 21:36:31.003674   21104 cli_runner.go:211] docker container inspect embed-certs-213431 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:31.025650   21104 out.go:177] 
	W1025 21:36:31.046755   21104 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431
	
	W1025 21:36:31.046780   21104 out.go:239] * 
	* 
	W1025 21:36:31.050699   21104 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_status_8980859c28362053cbc8940f41f258f108f0854e_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_status_8980859c28362053cbc8940f41f258f108f0854e_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:36:31.071285   21104 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p embed-certs-213431 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "14260a1c085002450b4b15c4d962abf1c775e55798082f27876fc3c73b239585",
	        "Created": "2022-10-26T04:36:20.793979296Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (117.091954ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:31.277982   21118 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-213431

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:235: (dbg) docker inspect embed-certs-213431:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-213431",
	        "Id": "14260a1c085002450b4b15c4d962abf1c775e55798082f27876fc3c73b239585",
	        "Created": "2022-10-26T04:36:20.793979296Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-213431"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-213431 -n embed-certs-213431: exit status 7 (117.678297ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:31.463324   21130 status.go:249] status error: host: state: unknown state "embed-certs-213431": docker container inspect embed-certs-213431 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-213431

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-213431" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.58s)
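pause exits at its first step: the .State.Status probe visible in the stderr above fails because the container does not exist. minikube status runs the same probe and maps the missing container to "Nonexistent" with exit code 7, which is why the post-mortem helpers note "may be ok". A minimal sketch of the probe, assuming a local docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState reads docker's view of a container, treating "no such
	// container" (inspect exit status 1) as Nonexistent rather than an error.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "Nonexistent"
		}
		return strings.TrimSpace(string(out)) // e.g. "running", "exited", "paused"
	}

	func main() {
		fmt.Println(containerState("embed-certs-213431"))
	}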

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-213432" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "bc10a1a69904f5d1e3ca2518f136955116cca3892d02c5c76bbda980f513bf71",
	        "Created": "2022-10-26T04:36:21.902549041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (115.792861ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:31.294748   21119 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-213432" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-213432 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-213432 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (33.822019ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-213432" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-213432 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "bc10a1a69904f5d1e3ca2518f136955116cca3892d02c5c76bbda980f513bf71",
	        "Created": "2022-10-26T04:36:21.902549041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (115.253745ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:31.511285   21133 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-213432 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-213432 "sudo crictl images -o json": exit status 80 (213.245844ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-213432 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.3",
- 	"registry.k8s.io/kube-controller-manager:v1.25.3",
- 	"registry.k8s.io/kube-proxy:v1.25.3",
- 	"registry.k8s.io/kube-scheduler:v1.25.3",
- 	"registry.k8s.io/pause:3.8",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "bc10a1a69904f5d1e3ca2518f136955116cca3892d02c5c76bbda980f513bf71",
	        "Created": "2022-10-26T04:36:21.902549041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (114.71887ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:31.907872   21155 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.40s)
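The "(-want +got)" block above is a slice diff in the style of github.com/google/go-cmp: every expected tag sits on the minus side because nothing was decoded from the empty crictl output. A sketch that reproduces a diff of this shape (the exact rendering varies by cmp version), assuming the go-cmp module:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/pause:3.8",
		}
		got := []string{} // nothing decoded, as in the failed check above
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}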

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-213432 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-213432 --alsologtostderr -v=1: exit status 80 (220.005326ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:36:31.960634   21163 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:36:31.960852   21163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:36:31.960857   21163 out.go:309] Setting ErrFile to fd 2...
	I1025 21:36:31.960861   21163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:36:31.960972   21163 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:36:31.961275   21163 out.go:303] Setting JSON to false
	I1025 21:36:31.961291   21163 mustload.go:65] Loading cluster: default-k8s-diff-port-213432
	I1025 21:36:31.961590   21163 config.go:180] Loaded profile config "default-k8s-diff-port-213432": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:36:31.961925   21163 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}
	W1025 21:36:32.023637   21163 cli_runner.go:211] docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:32.045093   21163 out.go:177] 
	W1025 21:36:32.065259   21163 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432
	
	W1025 21:36:32.065276   21163 out.go:239] * 
	* 
	W1025 21:36:32.067849   21163 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:36:32.089375   21163 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-213432 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "bc10a1a69904f5d1e3ca2518f136955116cca3892d02c5c76bbda980f513bf71",
	        "Created": "2022-10-26T04:36:21.902549041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (200.230921ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:36:32.394952   21174 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-213432

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-213432:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-213432",
	        "Id": "bc10a1a69904f5d1e3ca2518f136955116cca3892d02c5c76bbda980f513bf71",
	        "Created": "2022-10-26T04:36:21.902549041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213432"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213432 -n default-k8s-diff-port-213432: exit status 7 (158.253721ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:36:32.620881   21185 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-213432": docker container inspect default-k8s-diff-port-213432 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-213432

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-213432" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.71s)

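This Pause failure is secondary: the post-mortem inspect shows the profile's docker network still exists but its Containers map is empty, so the node container was already gone and "minikube pause" exits 80 without pausing anything. A short sketch of how a harness-style "(dbg) Non-zero exit" line can be produced, recovering the exit status from exec (illustrative only; the binary path and profile name come from this run and will not exist on another machine):

    // nonzero_exit.go - illustrative sketch (Go 1.12+ for ExitError.ExitCode).
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Hypothetical reproduction of the failing step above.
    	cmd := exec.Command("out/minikube-darwin-amd64", "pause",
    		"-p", "default-k8s-diff-port-213432", "--alsologtostderr", "-v=1")
    	if err := cmd.Run(); err != nil {
    		if exitErr, ok := err.(*exec.ExitError); ok {
    			fmt.Printf("Non-zero exit: exit status %d\n", exitErr.ExitCode())
    			return
    		}
    		fmt.Println("could not run:", err)
    	}
    }
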
TestStartStop/group/newest-cni/serial/FirstStart (39.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-213632 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-213632 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (39.445591476s)

-- stdout --
	* [newest-cni-213632] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node newest-cni-213632 in cluster newest-cni-213632
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-213632" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
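In the stderr trace that follows, host creation fails because the kic volume-prep "docker run" exits 125 (Docker Desktop's containerd socket refuses connections), and minikube's retry helper (the retry.go:31 frames) re-issues the port-22 inspect with varying, mostly growing delays before demolishing and recreating the host. A generic sketch of that retry-with-growing-delay pattern (retryWithBackoff and its parameters are illustrative names chosen here, not minikube's API):

    // backoff.go - illustrative sketch of the "will retry after ..." pattern.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff runs op up to attempts times, sleeping a growing
    // delay between failures, and returns the last error if all fail.
    func retryWithBackoff(op func() error, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the delay between attempts
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(func() error {
    		return errors.New("No such container: newest-cni-213632")
    	}, 4, 300*time.Millisecond)
    	fmt.Println("gave up:", err)
    }
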
** stderr ** 
	I1025 21:36:32.998425   21210 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:36:32.998612   21210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:36:32.998617   21210 out.go:309] Setting ErrFile to fd 2...
	I1025 21:36:32.998620   21210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:36:32.998740   21210 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:36:32.999245   21210 out.go:303] Setting JSON to false
	I1025 21:36:33.014762   21210 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5762,"bootTime":1666753231,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:36:33.014858   21210 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:36:33.036515   21210 out.go:177] * [newest-cni-213632] minikube v1.27.1 on Darwin 12.6
	I1025 21:36:33.057799   21210 notify.go:220] Checking for updates...
	I1025 21:36:33.079280   21210 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:36:33.122379   21210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:36:33.164336   21210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:36:33.190470   21210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:36:33.232552   21210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:36:33.254411   21210 config.go:180] Loaded profile config "default-k8s-diff-port-213432": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:36:33.254564   21210 config.go:180] Loaded profile config "missing-upgrade-205231": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 21:36:33.254651   21210 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:36:33.330455   21210 docker.go:137] docker version: linux-20.10.17
	I1025 21:36:33.330622   21210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:36:33.461382   21210 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:41 SystemTime:2022-10-26 04:36:33.406915258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:36:33.504515   21210 out.go:177] * Using the docker driver based on user configuration
	I1025 21:36:33.526419   21210 start.go:282] selected driver: docker
	I1025 21:36:33.526451   21210 start.go:808] validating driver "docker" against <nil>
	I1025 21:36:33.526477   21210 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:36:33.530449   21210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:36:33.682596   21210 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:40 SystemTime:2022-10-26 04:36:33.60845982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:36:33.682688   21210 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	W1025 21:36:33.682707   21210 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 21:36:33.682872   21210 start_flags.go:907] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 21:36:33.725940   21210 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 21:36:33.747732   21210 cni.go:95] Creating CNI manager for ""
	I1025 21:36:33.747789   21210 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:36:33.747817   21210 start_flags.go:317] config:
	{Name:newest-cni-213632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-213632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:36:33.769904   21210 out.go:177] * Starting control plane node newest-cni-213632 in cluster newest-cni-213632
	I1025 21:36:33.812537   21210 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:36:33.833720   21210 out.go:177] * Pulling base image ...
	I1025 21:36:33.876754   21210 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:36:33.876792   21210 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:36:33.876832   21210 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:36:33.876853   21210 cache.go:57] Caching tarball of preloaded images
	I1025 21:36:33.877077   21210 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:36:33.877099   21210 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:36:33.878075   21210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/newest-cni-213632/config.json ...
	I1025 21:36:33.878192   21210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/newest-cni-213632/config.json: {Name:mkde0ba0ca6e85c12ee1240711117ddf6aecbd1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:33.940561   21210 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:36:33.940581   21210 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:36:33.940590   21210 cache.go:208] Successfully downloaded all kic artifacts
	I1025 21:36:33.940657   21210 start.go:364] acquiring machines lock for newest-cni-213632: {Name:mka889bd722023c4163394a0f9f321da2cc04c3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:36:33.940800   21210 start.go:368] acquired machines lock for "newest-cni-213632" in 131.128µs
	I1025 21:36:33.940824   21210 start.go:93] Provisioning new machine with config: &{Name:newest-cni-213632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-213632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 21:36:33.940893   21210 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:36:33.983413   21210 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:36:33.983619   21210 start.go:159] libmachine.API.Create for "newest-cni-213632" (driver="docker")
	I1025 21:36:33.983644   21210 client.go:168] LocalClient.Create starting
	I1025 21:36:33.983701   21210 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:36:33.983732   21210 main.go:134] libmachine: Decoding PEM data...
	I1025 21:36:33.983747   21210 main.go:134] libmachine: Parsing certificate...
	I1025 21:36:33.983800   21210 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:36:33.983823   21210 main.go:134] libmachine: Decoding PEM data...
	I1025 21:36:33.983831   21210 main.go:134] libmachine: Parsing certificate...
	I1025 21:36:33.984276   21210 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:36:34.047507   21210 cli_runner.go:211] docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:36:34.047594   21210 network_create.go:272] running [docker network inspect newest-cni-213632] to gather additional debugging logs...
	I1025 21:36:34.047607   21210 cli_runner.go:164] Run: docker network inspect newest-cni-213632
	W1025 21:36:34.108335   21210 cli_runner.go:211] docker network inspect newest-cni-213632 returned with exit code 1
	I1025 21:36:34.108357   21210 network_create.go:275] error running [docker network inspect newest-cni-213632]: docker network inspect newest-cni-213632: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-213632
	I1025 21:36:34.108386   21210 network_create.go:277] output of [docker network inspect newest-cni-213632]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-213632
	
	** /stderr **
	I1025 21:36:34.108479   21210 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:36:34.170493   21210 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004967f0] misses:0}
	I1025 21:36:34.170532   21210 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:34.170545   21210 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:36:34.170623   21210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	W1025 21:36:34.232467   21210 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632 returned with exit code 1
	W1025 21:36:34.232504   21210 network_create.go:107] failed to create docker network newest-cni-213632 192.168.49.0/24, will retry: subnet is taken
	I1025 21:36:34.232792   21210 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0] amended:false}} dirty:map[] misses:0}
	I1025 21:36:34.232808   21210 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:34.232999   21210 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0] amended:true}} dirty:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58] misses:0}
	I1025 21:36:34.233013   21210 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:34.233026   21210 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:36:34.233089   21210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	W1025 21:36:34.293984   21210 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632 returned with exit code 1
	W1025 21:36:34.294016   21210 network_create.go:107] failed to create docker network newest-cni-213632 192.168.58.0/24, will retry: subnet is taken
	I1025 21:36:34.294877   21210 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0] amended:true}} dirty:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58] misses:1}
	I1025 21:36:34.294917   21210 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:34.295430   21210 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0] amended:true}} dirty:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58 192.168.67.0:0xc000a94ec8] misses:1}
	I1025 21:36:34.295452   21210 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:36:34.295464   21210 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:36:34.295533   21210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	I1025 21:36:34.411887   21210 network_create.go:99] docker network newest-cni-213632 192.168.67.0/24 created
	I1025 21:36:34.411960   21210 kic.go:106] calculated static IP "192.168.67.2" for the "newest-cni-213632" container
	I1025 21:36:34.412062   21210 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:36:34.474591   21210 cli_runner.go:164] Run: docker volume create newest-cni-213632 --label name.minikube.sigs.k8s.io=newest-cni-213632 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:36:34.535773   21210 oci.go:103] Successfully created a docker volume newest-cni-213632
	I1025 21:36:34.535900   21210 cli_runner.go:164] Run: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:36:34.760146   21210 cli_runner.go:211] docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:36:34.760217   21210 client.go:171] LocalClient.Create took 776.564178ms
	I1025 21:36:36.760484   21210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:36:36.760712   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:36.825917   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:36.826016   21210 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:37.104603   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:37.170027   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:37.170111   21210 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:37.711162   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:37.780017   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:37.780102   21210 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:38.437164   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:38.502863   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:36:38.502946   21210 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:36:38.502972   21210 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:38.503025   21210 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:36:38.503083   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:38.564069   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:38.564146   21210 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:38.797705   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:38.863436   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:38.863535   21210 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:39.311058   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:39.377106   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:39.377200   21210 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:39.697144   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:39.761544   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:36:39.761625   21210 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:40.317119   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:36:40.382333   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:36:40.382425   21210 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:36:40.382444   21210 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:40.382456   21210 start.go:128] duration metric: createHost completed in 6.441536436s
	I1025 21:36:40.382463   21210 start.go:83] releasing machines lock for "newest-cni-213632", held for 6.441635281s
	W1025 21:36:40.382477   21210 start.go:603] error starting host: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I1025 21:36:40.382896   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:40.443307   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:40.443363   21210 delete.go:82] Unable to get host status for newest-cni-213632, assuming it has already been deleted: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:36:40.443517   21210 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:36:40.443525   21210 start.go:618] Will try again in 5 seconds ...
	I1025 21:36:45.445452   21210 start.go:364] acquiring machines lock for newest-cni-213632: {Name:mka889bd722023c4163394a0f9f321da2cc04c3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:36:45.445642   21210 start.go:368] acquired machines lock for "newest-cni-213632" in 146.401µs
	I1025 21:36:45.445673   21210 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:36:45.445686   21210 fix.go:55] fixHost starting: 
	I1025 21:36:45.446127   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:45.512779   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:45.512828   21210 fix.go:103] recreateIfNeeded on newest-cni-213632: state= err=unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:45.512852   21210 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:36:45.534650   21210 out.go:177] * docker "newest-cni-213632" container is missing, will recreate.
	I1025 21:36:45.579703   21210 delete.go:124] DEMOLISHING newest-cni-213632 ...
	I1025 21:36:45.579868   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:45.642162   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:36:45.642199   21210 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:45.642210   21210 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:45.642557   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:45.702197   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:45.702242   21210 delete.go:82] Unable to get host status for newest-cni-213632, assuming it has already been deleted: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:45.702316   21210 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-213632
	W1025 21:36:45.762988   21210 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-213632 returned with exit code 1
	I1025 21:36:45.763018   21210 kic.go:356] could not find the container newest-cni-213632 to remove it. will try anyways
	I1025 21:36:45.763084   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:45.824149   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:36:45.824189   21210 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:45.824273   21210 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0"
	W1025 21:36:45.886013   21210 cli_runner.go:211] docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:36:45.886037   21210 oci.go:646] error shutdown newest-cni-213632: docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:46.888415   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:46.955230   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:46.955283   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:46.955295   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:46.955313   21210 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:47.356119   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:47.420682   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:47.420720   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:47.420733   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:47.420752   21210 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:48.183665   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:48.307674   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:48.307727   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:48.307739   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:48.307765   21210 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:49.786852   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:49.850848   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:49.850887   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:49.850900   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:49.850920   21210 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:51.057214   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:51.120660   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:51.120700   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:51.120711   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:51.120737   21210 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:53.352318   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:53.417854   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:53.417896   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:53.417907   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:53.417928   21210 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:56.526334   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:36:56.595439   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:36:56.595478   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:36:56.595489   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:36:56.595508   21210 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:02.116109   21210 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:02.182136   21210 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:02.182176   21210 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:02.182188   21210 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:02.182213   21210 oci.go:88] couldn't shut down newest-cni-213632 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	 
	I1025 21:37:02.182289   21210 cli_runner.go:164] Run: docker rm -f -v newest-cni-213632
	I1025 21:37:02.245424   21210 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-213632
	W1025 21:37:02.307421   21210 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-213632 returned with exit code 1
	I1025 21:37:02.307513   21210 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:37:02.368976   21210 cli_runner.go:164] Run: docker network rm newest-cni-213632
	W1025 21:37:02.479122   21210 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:37:02.479140   21210 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:37:03.481309   21210 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:37:03.503551   21210 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:37:03.503716   21210 start.go:159] libmachine.API.Create for "newest-cni-213632" (driver="docker")
	I1025 21:37:03.503760   21210 client.go:168] LocalClient.Create starting
	I1025 21:37:03.503920   21210 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:37:03.503986   21210 main.go:134] libmachine: Decoding PEM data...
	I1025 21:37:03.504010   21210 main.go:134] libmachine: Parsing certificate...
	I1025 21:37:03.504087   21210 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:37:03.504135   21210 main.go:134] libmachine: Decoding PEM data...
	I1025 21:37:03.504151   21210 main.go:134] libmachine: Parsing certificate...
	I1025 21:37:03.504987   21210 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:37:03.569221   21210 cli_runner.go:211] docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:37:03.569290   21210 network_create.go:272] running [docker network inspect newest-cni-213632] to gather additional debugging logs...
	I1025 21:37:03.569301   21210 cli_runner.go:164] Run: docker network inspect newest-cni-213632
	W1025 21:37:03.629795   21210 cli_runner.go:211] docker network inspect newest-cni-213632 returned with exit code 1
	I1025 21:37:03.629817   21210 network_create.go:275] error running [docker network inspect newest-cni-213632]: docker network inspect newest-cni-213632: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-213632
	I1025 21:37:03.629833   21210 network_create.go:277] output of [docker network inspect newest-cni-213632]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-213632
	
	** /stderr **
	I1025 21:37:03.629916   21210 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:37:03.691271   21210 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0] amended:true}} dirty:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58 192.168.67.0:0xc000a94ec8] misses:1}
	I1025 21:37:03.691299   21210 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:03.691504   21210 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0] amended:true}} dirty:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58 192.168.67.0:0xc000a94ec8] misses:2}
	I1025 21:37:03.691512   21210 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:03.691699   21210 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58 192.168.67.0:0xc000a94ec8] amended:false}} dirty:map[] misses:0}
	I1025 21:37:03.691708   21210 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:03.691920   21210 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58 192.168.67.0:0xc000a94ec8] amended:true}} dirty:map[192.168.49.0:0xc0004967f0 192.168.58.0:0xc000128a58 192.168.67.0:0xc000a94ec8 192.168.76.0:0xc000a95430] misses:0}
	I1025 21:37:03.691938   21210 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:03.691945   21210 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:37:03.692013   21210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	I1025 21:37:03.781415   21210 network_create.go:99] docker network newest-cni-213632 192.168.76.0/24 created
	I1025 21:37:03.781455   21210 kic.go:106] calculated static IP "192.168.76.2" for the "newest-cni-213632" container
	I1025 21:37:03.781561   21210 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:37:03.844093   21210 cli_runner.go:164] Run: docker volume create newest-cni-213632 --label name.minikube.sigs.k8s.io=newest-cni-213632 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:37:03.904477   21210 oci.go:103] Successfully created a docker volume newest-cni-213632
	I1025 21:37:03.904580   21210 cli_runner.go:164] Run: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:37:04.037074   21210 cli_runner.go:211] docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:37:04.037124   21210 client.go:171] LocalClient.Create took 533.341151ms
	I1025 21:37:06.039542   21210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:37:06.039643   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:06.104540   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:06.104620   21210 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:06.305318   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:06.368753   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:06.368834   21210 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:06.811980   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:06.877863   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:06.877966   21210 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:07.282721   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:07.344050   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:07.344134   21210 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:07.939823   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:08.030928   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:08.031034   21210 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:08.031058   21210 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:08.031115   21210 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:37:08.031158   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:08.091629   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:08.091713   21210 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:08.361767   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:08.425491   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:08.425589   21210 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:08.938284   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:09.006823   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:09.006919   21210 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:09.455404   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:09.520069   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:09.520162   21210 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:09.520181   21210 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:09.520192   21210 start.go:128] duration metric: createHost completed in 6.038819959s
	I1025 21:37:09.520251   21210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:37:09.520290   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:09.580785   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:09.580862   21210 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:09.895442   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:09.960266   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:09.960362   21210 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:10.226644   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:10.292910   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:10.292988   21210 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:11.063337   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:11.126661   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:11.126743   21210 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:11.126760   21210 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:11.126818   21210 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:37:11.126857   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:11.188098   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:11.188185   21210 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:11.446614   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:11.514080   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:11.514166   21210 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:11.714540   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:11.779560   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:11.779646   21210 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:12.152332   21210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:12.217003   21210 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:12.217103   21210 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:12.217122   21210 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:12.217133   21210 fix.go:57] fixHost completed within 26.771362284s
	I1025 21:37:12.217141   21210 start.go:83] releasing machines lock for "newest-cni-213632", held for 26.771401439s
	W1025 21:37:12.217304   21210 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-213632" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-213632" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:37:12.260954   21210 out.go:177] 
	W1025 21:37:12.282662   21210 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:37:12.282682   21210 out.go:239] * 
	* 
	W1025 21:37:12.283280   21210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:37:12.368905   21210 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p newest-cni-213632 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:

-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "6e6615972d37463ed871e7d5a747394856acf3e9e4b0ec80da3cd5adc541295f",
	        "Created": "2022-10-26T04:37:03.767577603Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (113.334724ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:37:12.580857   21412 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (39.63s)

TestStartStop/group/newest-cni/serial/Stop (14.86s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-213632 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p newest-cni-213632 --alsologtostderr -v=3: exit status 82 (14.678791948s)

-- stdout --
	* Stopping node "newest-cni-213632"  ...
	* Stopping node "newest-cni-213632"  ...
	* Stopping node "newest-cni-213632"  ...
	* Stopping node "newest-cni-213632"  ...
	* Stopping node "newest-cni-213632"  ...
	* Stopping node "newest-cni-213632"  ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:37:12.863478   21420 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:37:12.863656   21420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:37:12.863661   21420 out.go:309] Setting ErrFile to fd 2...
	I1025 21:37:12.863665   21420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:37:12.863767   21420 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:37:12.864069   21420 out.go:303] Setting JSON to false
	I1025 21:37:12.864213   21420 mustload.go:65] Loading cluster: newest-cni-213632
	I1025 21:37:12.864495   21420 config.go:180] Loaded profile config "newest-cni-213632": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:37:12.864554   21420 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/newest-cni-213632/config.json ...
	I1025 21:37:12.864823   21420 mustload.go:65] Loading cluster: newest-cni-213632
	I1025 21:37:12.864915   21420 config.go:180] Loaded profile config "newest-cni-213632": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:37:12.864945   21420 stop.go:39] StopHost: newest-cni-213632
	I1025 21:37:12.886642   21420 out.go:177] * Stopping node "newest-cni-213632"  ...
	I1025 21:37:12.930675   21420 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:12.992441   21420 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:12.992526   21420 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:37:12.992549   21420 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:12.992571   21420 retry.go:31] will retry after 1.104660288s: docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:14.098515   21420 stop.go:39] StopHost: newest-cni-213632
	I1025 21:37:14.122098   21420 out.go:177] * Stopping node "newest-cni-213632"  ...
	I1025 21:37:14.143955   21420 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:14.206275   21420 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:14.206320   21420 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:37:14.206339   21420 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:14.206367   21420 retry.go:31] will retry after 2.160763633s: docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:16.369275   21420 stop.go:39] StopHost: newest-cni-213632
	I1025 21:37:16.391762   21420 out.go:177] * Stopping node "newest-cni-213632"  ...
	I1025 21:37:16.435718   21420 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:16.500993   21420 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:16.501035   21420 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:37:16.501052   21420 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:16.501067   21420 retry.go:31] will retry after 2.62026012s: docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:19.121573   21420 stop.go:39] StopHost: newest-cni-213632
	I1025 21:37:19.144066   21420 out.go:177] * Stopping node "newest-cni-213632"  ...
	I1025 21:37:19.165967   21420 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:19.228451   21420 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:19.228486   21420 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:37:19.228511   21420 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:19.228526   21420 retry.go:31] will retry after 3.164785382s: docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:22.395436   21420 stop.go:39] StopHost: newest-cni-213632
	I1025 21:37:22.417784   21420 out.go:177] * Stopping node "newest-cni-213632"  ...
	I1025 21:37:22.461876   21420 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:22.525589   21420 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:22.525624   21420 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:37:22.525641   21420 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:22.525656   21420 retry.go:31] will retry after 4.680977329s: docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:27.208812   21420 stop.go:39] StopHost: newest-cni-213632
	I1025 21:37:27.231343   21420 out.go:177] * Stopping node "newest-cni-213632"  ...
	I1025 21:37:27.275075   21420 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:27.340156   21420 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:27.340192   21420 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	W1025 21:37:27.340210   21420 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:27.361818   21420 out.go:177] 
	W1025 21:37:27.383847   21420 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-213632 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-213632 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:27.383875   21420 out.go:239] * 
	* 
	W1025 21:37:27.387775   21420 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:37:27.447654   21420 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p newest-cni-213632 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:

-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "6e6615972d37463ed871e7d5a747394856acf3e9e4b0ec80da3cd5adc541295f",
	        "Created": "2022-10-26T04:37:03.767577603Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (112.742127ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:37:27.669098   21448 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (14.86s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.56s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (113.278018ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:37:27.782671   21452 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-213632 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:

-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "6e6615972d37463ed871e7d5a747394856acf3e9e4b0ec80da3cd5adc541295f",
	        "Created": "2022-10-26T04:37:03.767577603Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (110.718173ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1025 21:37:28.228225   21462 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.56s)

TestStartStop/group/newest-cni/serial/SecondStart (61.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-213632 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-213632 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: exit status 80 (1m1.640248721s)

-- stdout --
	* [newest-cni-213632] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-213632 in cluster newest-cni-213632
	* Pulling base image ...
	* docker "newest-cni-213632" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-213632" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1025 21:37:28.279588   21466 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:37:28.279747   21466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:37:28.279752   21466 out.go:309] Setting ErrFile to fd 2...
	I1025 21:37:28.279756   21466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:37:28.279881   21466 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:37:28.280347   21466 out.go:303] Setting JSON to false
	I1025 21:37:28.295086   21466 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5817,"bootTime":1666753231,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 21:37:28.295183   21466 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 21:37:28.316899   21466 out.go:177] * [newest-cni-213632] minikube v1.27.1 on Darwin 12.6
	I1025 21:37:28.360146   21466 notify.go:220] Checking for updates...
	I1025 21:37:28.381699   21466 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 21:37:28.402912   21466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 21:37:28.425057   21466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 21:37:28.446620   21466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:37:28.467918   21466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 21:37:28.489625   21466 config.go:180] Loaded profile config "newest-cni-213632": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:37:28.490242   21466 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 21:37:28.556848   21466 docker.go:137] docker version: linux-20.10.17
	I1025 21:37:28.556966   21466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:37:28.686599   21466 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:37:28.629302239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:37:28.729726   21466 out.go:177] * Using the docker driver based on existing profile
	I1025 21:37:28.750892   21466 start.go:282] selected driver: docker
	I1025 21:37:28.750929   21466 start.go:808] validating driver "docker" against &{Name:newest-cni-213632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-213632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 21:37:28.751126   21466 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:37:28.754508   21466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:37:28.882856   21466 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:39 SystemTime:2022-10-26 04:37:28.826549001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 21:37:28.883023   21466 start_flags.go:907] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 21:37:28.883040   21466 cni.go:95] Creating CNI manager for ""
	I1025 21:37:28.883049   21466 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:37:28.883061   21466 start_flags.go:317] config:
	{Name:newest-cni-213632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-213632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
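
The two docker info dumps above come from minikube shelling out to `docker system info --format "{{json .}}"` (cli_runner.go:164) and decoding the JSON in info.go. A minimal Go sketch of that probe, assuming only a docker CLI on PATH; the struct here names a few illustrative fields, not minikube's actual type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo decodes only a few illustrative fields; minikube's real
    // struct in info.go carries many more.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        // Same invocation as the cli_runner.go:164 lines above.
        out, err := exec.Command("docker", "system", "info",
            "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("server %s on %s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }
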
	I1025 21:37:28.925792   21466 out.go:177] * Starting control plane node newest-cni-213632 in cluster newest-cni-213632
	I1025 21:37:28.946921   21466 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 21:37:28.968707   21466 out.go:177] * Pulling base image ...
	I1025 21:37:29.010700   21466 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 21:37:29.010706   21466 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 21:37:29.010862   21466 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 21:37:29.010884   21466 cache.go:57] Caching tarball of preloaded images
	I1025 21:37:29.011076   21466 preload.go:174] Found /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 21:37:29.011092   21466 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 21:37:29.012118   21466 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/newest-cni-213632/config.json ...
	I1025 21:37:29.075245   21466 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 21:37:29.075346   21466 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 21:37:29.075356   21466 cache.go:208] Successfully downloaded all kic artifacts
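
The pull is skipped above because the pinned kicbase digest already exists in the local daemon (image.go:76/80). That presence test reduces to whether an image inspect succeeds for the ref; a one-function sketch (the helper name is illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether the local daemon already holds ref,
    // the same yes/no that lets the pull above be skipped.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094"
        fmt.Println(imageInDaemon(ref))
    }
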
	I1025 21:37:29.075423   21466 start.go:364] acquiring machines lock for newest-cni-213632: {Name:mka889bd722023c4163394a0f9f321da2cc04c3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:37:29.075501   21466 start.go:368] acquired machines lock for "newest-cni-213632" in 57.168µs
	I1025 21:37:29.075518   21466 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:37:29.075526   21466 fix.go:55] fixHost starting: 
	I1025 21:37:29.075741   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:29.136402   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:29.136456   21466 fix.go:103] recreateIfNeeded on newest-cni-213632: state= err=unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:29.136476   21466 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:37:29.158195   21466 out.go:177] * docker "newest-cni-213632" container is missing, will recreate.
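
The fixHost sequence above concludes the machine is gone because `docker container inspect --format={{.State.Status}}` exits 1 with "No such container" on stderr while the profile's config.json still exists. A sketch of that probe under the same assumption (the real logic is spread across fix.go and oci.go; this helper is illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState runs the same inspect command logged above and maps
    // "No such container" to the machineExists:false outcome.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            var ee *exec.ExitError
            if errors.As(err, &ee) && strings.Contains(string(ee.Stderr), "No such container") {
                return "", errors.New("machine does not exist")
            }
            return "", fmt.Errorf("unknown state %q: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("newest-cni-213632")
        fmt.Println(state, err)
    }
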
	I1025 21:37:29.179208   21466 delete.go:124] DEMOLISHING newest-cni-213632 ...
	I1025 21:37:29.179406   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:29.241963   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:29.241999   21466 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:29.242011   21466 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:29.242360   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:29.303346   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:29.303391   21466 delete.go:82] Unable to get host status for newest-cni-213632, assuming it has already been deleted: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:29.303459   21466 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-213632
	W1025 21:37:29.362738   21466 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-213632 returned with exit code 1
	I1025 21:37:29.362765   21466 kic.go:356] could not find the container newest-cni-213632 to remove it. will try anyways
	I1025 21:37:29.362841   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:29.423692   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:37:29.423731   21466 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:29.423820   21466 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0"
	W1025 21:37:29.484082   21466 cli_runner.go:211] docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:37:29.484113   21466 oci.go:646] error shutdown newest-cni-213632: docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:30.485641   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:30.549662   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:30.549703   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:30.549711   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:30.549738   21466 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:31.104444   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:31.169341   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:31.169390   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:31.169407   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:31.169426   21466 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:32.252204   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:32.317427   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:32.317464   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:32.317473   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:32.317498   21466 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:33.630028   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:33.693807   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:33.693844   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:33.693865   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:33.693885   21466 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:35.276541   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:35.339398   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:35.339437   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:35.339445   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:35.339471   21466 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:37.682386   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:37.746560   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:37.746608   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:37.746621   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:37.746640   21466 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:42.255157   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:42.319432   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:42.319473   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:42.319484   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:42.319504   21466 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:45.543385   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:45.609802   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:45.609841   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:45.609849   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:37:45.609906   21466 oci.go:88] couldn't shut down newest-cni-213632 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	 
	I1025 21:37:45.609974   21466 cli_runner.go:164] Run: docker rm -f -v newest-cni-213632
	I1025 21:37:45.673221   21466 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-213632
	W1025 21:37:45.733449   21466 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-213632 returned with exit code 1
	I1025 21:37:45.733586   21466 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:37:45.795375   21466 cli_runner.go:164] Run: docker network rm newest-cni-213632
	W1025 21:37:45.915925   21466 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:37:45.915945   21466 fix.go:115] Sleeping 1 second for extra luck!
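
The shutdown-verification loop above (oci.go:658-660) gave up after seven retries, with retry.go:31 sleeping jittered, roughly increasing delays between probes (552ms, 1.08s, 1.31s, ... 4.5s). A generic Go sketch of that retry shape, not minikube's exact helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo calls fn until it succeeds or attempts run out, sleeping a
    // jittered, roughly doubling delay in between -- the same shape as the
    // "will retry after ..." lines above.
    func retryExpo(fn func() error, base time.Duration, attempts int) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // +/-50% jitter so concurrent retriers don't probe in lockstep.
            sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        n := 0
        fmt.Println(retryExpo(func() error {
            if n++; n < 4 {
                return errors.New("couldn't verify container is exited")
            }
            return nil
        }, 500*time.Millisecond, 8))
    }
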
	I1025 21:37:46.916140   21466 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:37:46.938419   21466 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:37:46.938645   21466 start.go:159] libmachine.API.Create for "newest-cni-213632" (driver="docker")
	I1025 21:37:46.938690   21466 client.go:168] LocalClient.Create starting
	I1025 21:37:46.938860   21466 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:37:46.938929   21466 main.go:134] libmachine: Decoding PEM data...
	I1025 21:37:46.938957   21466 main.go:134] libmachine: Parsing certificate...
	I1025 21:37:46.939057   21466 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:37:46.939102   21466 main.go:134] libmachine: Decoding PEM data...
	I1025 21:37:46.939122   21466 main.go:134] libmachine: Parsing certificate...
	I1025 21:37:46.960669   21466 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:37:47.024773   21466 cli_runner.go:211] docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:37:47.024861   21466 network_create.go:272] running [docker network inspect newest-cni-213632] to gather additional debugging logs...
	I1025 21:37:47.024880   21466 cli_runner.go:164] Run: docker network inspect newest-cni-213632
	W1025 21:37:47.086018   21466 cli_runner.go:211] docker network inspect newest-cni-213632 returned with exit code 1
	I1025 21:37:47.086044   21466 network_create.go:275] error running [docker network inspect newest-cni-213632]: docker network inspect newest-cni-213632: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-213632
	I1025 21:37:47.086063   21466 network_create.go:277] output of [docker network inspect newest-cni-213632]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-213632
	
	** /stderr **
	I1025 21:37:47.086134   21466 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:37:47.148410   21466 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8] misses:0}
	I1025 21:37:47.148456   21466 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:47.148471   21466 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:37:47.148549   21466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	W1025 21:37:47.209319   21466 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632 returned with exit code 1
	W1025 21:37:47.209357   21466 network_create.go:107] failed to create docker network newest-cni-213632 192.168.49.0/24, will retry: subnet is taken
	I1025 21:37:47.209628   21466 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8] amended:false}} dirty:map[] misses:0}
	I1025 21:37:47.209647   21466 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:47.209894   21466 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058] misses:0}
	I1025 21:37:47.209910   21466 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:47.209920   21466 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:37:47.209986   21466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	W1025 21:37:47.271309   21466 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632 returned with exit code 1
	W1025 21:37:47.271343   21466 network_create.go:107] failed to create docker network newest-cni-213632 192.168.58.0/24, will retry: subnet is taken
	I1025 21:37:47.271631   21466 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058] misses:1}
	I1025 21:37:47.271650   21466 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:47.271854   21466 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058 192.168.67.0:0xc0006ec100] misses:1}
	I1025 21:37:47.271865   21466 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:37:47.271872   21466 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 21:37:47.271935   21466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	I1025 21:37:47.361653   21466 network_create.go:99] docker network newest-cni-213632 192.168.67.0/24 created
	I1025 21:37:47.361693   21466 kic.go:106] calculated static IP "192.168.67.2" for the "newest-cni-213632" container
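
The network dance above shows how a free subnet is found: start at 192.168.49.0/24, reserve each candidate for a minute, and step the third octet by 9 (49, 58, 67, ...) whenever `docker network create` reports the subnet is taken. A compressed sketch of that probe loop; the step size and bounds are read off this log, not taken from minikube's source, and any create failure is treated as "subnet is taken", which is cruder than the real error handling:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pickSubnet tries successive 192.168.x.0/24 blocks until one of the
    // `docker network create` calls succeeds, mirroring the
    // network_create.go:115 attempts above.
    func pickSubnet(name string) (string, error) {
        for octet := 49; octet <= 247; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                name).Run()
            if err == nil {
                return subnet, nil
            }
        }
        return "", fmt.Errorf("no free /24 found for network %s", name)
    }

    func main() {
        fmt.Println(pickSubnet("newest-cni-213632"))
    }
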
	I1025 21:37:47.361778   21466 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:37:47.424171   21466 cli_runner.go:164] Run: docker volume create newest-cni-213632 --label name.minikube.sigs.k8s.io=newest-cni-213632 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:37:47.484955   21466 oci.go:103] Successfully created a docker volume newest-cni-213632
	I1025 21:37:47.485079   21466 cli_runner.go:164] Run: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:37:47.618096   21466 cli_runner.go:211] docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:37:47.618159   21466 client.go:171] LocalClient.Create took 679.45755ms
	I1025 21:37:49.618415   21466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:37:49.618492   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:49.683412   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:49.683497   21466 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:49.835098   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:49.899934   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:49.900025   21466 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:50.202734   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:50.267467   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:50.267556   21466 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:50.841029   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:50.907648   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:50.907749   21466 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:50.907765   21466 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:50.907813   21466 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:37:50.907877   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:50.967847   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:50.967955   21466 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:51.148806   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:51.213435   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:51.213518   21466 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:51.546078   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:51.610892   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:51.610974   21466 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:52.073522   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:52.136389   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:52.136478   21466 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:52.136500   21466 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:52.136508   21466 start.go:128] duration metric: createHost completed in 5.220330811s
	I1025 21:37:52.136604   21466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:37:52.136655   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:52.197252   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:52.197350   21466 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:52.395512   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:52.460020   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:52.460099   21466 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:52.759883   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:52.822779   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:52.822856   21466 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:53.486387   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:53.549150   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:53.549244   21466 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:53.549261   21466 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:53.549321   21466 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:37:53.549362   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:53.610370   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:53.610450   21466 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:53.788613   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:53.851368   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:53.851466   21466 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:54.174586   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:54.238762   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:37:54.238855   21466 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:54.843303   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:37:54.908739   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:37:54.908825   21466 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:37:54.908847   21466 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:54.908856   21466 fix.go:57] fixHost completed within 25.833249022s
	I1025 21:37:54.908864   21466 start.go:83] releasing machines lock for "newest-cni-213632", held for 25.833273605s
	W1025 21:37:54.908877   21466 start.go:603] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W1025 21:37:54.909057   21466 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:37:54.909068   21466 start.go:618] Will try again in 5 seconds ...
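
The root cause of this first failed attempt is the exit code 125 from `docker run` above: the Docker CLI reached the daemon, but Docker Desktop's embedded containerd (/var/run/desktop-containerd/containerd.sock) was refusing connections, so nothing that creates a container can succeed until the VM recovers. A sketch of a health gate that waits for the server side to answer before retrying; this is an illustrative check, not something minikube runs:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForDaemon polls `docker version` until the server half responds.
    // A live CLI in front of a dead daemon -- the situation logged above --
    // fails exactly this probe.
    func waitForDaemon(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "version",
                "--format", "{{.Server.Version}}").Output()
            if err == nil {
                fmt.Println("daemon up, server version", strings.TrimSpace(string(out)))
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("docker daemon not reachable within %v", timeout)
    }

    func main() {
        if err := waitForDaemon(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
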
	I1025 21:37:59.911294   21466 start.go:364] acquiring machines lock for newest-cni-213632: {Name:mka889bd722023c4163394a0f9f321da2cc04c3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:37:59.911543   21466 start.go:368] acquired machines lock for "newest-cni-213632" in 210.061µs
	I1025 21:37:59.911636   21466 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:37:59.911646   21466 fix.go:55] fixHost starting: 
	I1025 21:37:59.911997   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:37:59.976897   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:37:59.976949   21466 fix.go:103] recreateIfNeeded on newest-cni-213632: state= err=unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:37:59.976962   21466 fix.go:108] machineExists: false. err=machine does not exist
	I1025 21:38:00.021732   21466 out.go:177] * docker "newest-cni-213632" container is missing, will recreate.
	I1025 21:38:00.043602   21466 delete.go:124] DEMOLISHING newest-cni-213632 ...
	I1025 21:38:00.043818   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:00.105702   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:38:00.105743   21466 stop.go:75] unable to get state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:00.105755   21466 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:00.106141   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:00.167086   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:00.167159   21466 delete.go:82] Unable to get host status for newest-cni-213632, assuming it has already been deleted: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:00.167241   21466 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-213632
	W1025 21:38:00.227330   21466 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-213632 returned with exit code 1
	I1025 21:38:00.227357   21466 kic.go:356] could not find the container newest-cni-213632 to remove it. will try anyways
	I1025 21:38:00.227428   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:00.300083   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	W1025 21:38:00.300120   21466 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:00.300187   21466 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0"
	W1025 21:38:00.360987   21466 cli_runner.go:211] docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 21:38:00.361012   21466 oci.go:646] error shutdown newest-cni-213632: docker exec --privileged -t newest-cni-213632 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:01.363374   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:01.426414   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:01.426453   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:01.426464   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:01.426482   21466 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:01.825457   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:01.891082   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:01.891124   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:01.891140   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:01.891158   21466 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:02.491426   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:02.554317   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:02.554365   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:02.554375   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:02.554399   21466 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:03.963741   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:04.026220   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:04.026283   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:04.026296   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:04.026316   21466 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:05.219044   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:05.285447   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:05.285490   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:05.285502   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:05.285521   21466 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:08.741836   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:08.808138   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:08.808196   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:08.808207   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:08.808231   21466 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:13.352327   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:13.418233   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:13.418280   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:13.418291   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:13.418317   21466 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:19.251671   21466 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:19.316816   21466 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:19.316880   21466 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:19.316893   21466 oci.go:660] temporary error: container newest-cni-213632 status is  but expect it to be exited
	I1025 21:38:19.316916   21466 oci.go:88] couldn't shut down newest-cni-213632 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	 
	I1025 21:38:19.316983   21466 cli_runner.go:164] Run: docker rm -f -v newest-cni-213632
	I1025 21:38:19.380166   21466 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-213632
	W1025 21:38:19.441464   21466 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-213632 returned with exit code 1
	I1025 21:38:19.441560   21466 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:38:19.502349   21466 cli_runner.go:164] Run: docker network rm newest-cni-213632
	W1025 21:38:19.605348   21466 delete.go:139] delete failed (probably ok) <nil>
	I1025 21:38:19.605365   21466 fix.go:115] Sleeping 1 second for extra luck!
	I1025 21:38:20.606285   21466 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:38:20.628611   21466 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:38:20.628837   21466 start.go:159] libmachine.API.Create for "newest-cni-213632" (driver="docker")
	I1025 21:38:20.628863   21466 client.go:168] LocalClient.Create starting
	I1025 21:38:20.629040   21466 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/ca.pem
	I1025 21:38:20.629109   21466 main.go:134] libmachine: Decoding PEM data...
	I1025 21:38:20.629133   21466 main.go:134] libmachine: Parsing certificate...
	I1025 21:38:20.629222   21466 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/14956-2080/.minikube/certs/cert.pem
	I1025 21:38:20.629265   21466 main.go:134] libmachine: Decoding PEM data...
	I1025 21:38:20.629280   21466 main.go:134] libmachine: Parsing certificate...
	I1025 21:38:20.629912   21466 cli_runner.go:164] Run: docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:38:20.693948   21466 cli_runner.go:211] docker network inspect newest-cni-213632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:38:20.694040   21466 network_create.go:272] running [docker network inspect newest-cni-213632] to gather additional debugging logs...
	I1025 21:38:20.694057   21466 cli_runner.go:164] Run: docker network inspect newest-cni-213632
	W1025 21:38:20.754093   21466 cli_runner.go:211] docker network inspect newest-cni-213632 returned with exit code 1
	I1025 21:38:20.754137   21466 network_create.go:275] error running [docker network inspect newest-cni-213632]: docker network inspect newest-cni-213632: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-213632
	I1025 21:38:20.754153   21466 network_create.go:277] output of [docker network inspect newest-cni-213632]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-213632
	
	** /stderr **
	I1025 21:38:20.754218   21466 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:38:20.815428   21466 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058 192.168.67.0:0xc0006ec100] misses:1}
	I1025 21:38:20.815456   21466 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:38:20.815669   21466 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058 192.168.67.0:0xc0006ec100] misses:2}
	I1025 21:38:20.815678   21466 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:38:20.815868   21466 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058 192.168.67.0:0xc0006ec100] amended:false}} dirty:map[] misses:0}
	I1025 21:38:20.815876   21466 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:38:20.816058   21466 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058 192.168.67.0:0xc0006ec100] amended:true}} dirty:map[192.168.49.0:0xc0006ec0a8 192.168.58.0:0xc000530058 192.168.67.0:0xc0006ec100 192.168.76.0:0xc000530330] misses:0}
	I1025 21:38:20.816077   21466 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 21:38:20.816084   21466 network_create.go:115] attempt to create docker network newest-cni-213632 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 21:38:20.816149   21466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-213632 newest-cni-213632
	I1025 21:38:20.906703   21466 network_create.go:99] docker network newest-cni-213632 192.168.76.0/24 created
	I1025 21:38:20.906731   21466 kic.go:106] calculated static IP "192.168.76.2" for the "newest-cni-213632" container
	I1025 21:38:20.906852   21466 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:38:20.968969   21466 cli_runner.go:164] Run: docker volume create newest-cni-213632 --label name.minikube.sigs.k8s.io=newest-cni-213632 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:38:21.029733   21466 oci.go:103] Successfully created a docker volume newest-cni-213632
	I1025 21:38:21.029869   21466 cli_runner.go:164] Run: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	W1025 21:38:21.162831   21466 cli_runner.go:211] docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib returned with exit code 125
	I1025 21:38:21.162902   21466 client.go:171] LocalClient.Create took 534.031394ms
	I1025 21:38:23.164784   21466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:38:23.164877   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:23.229422   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:23.229526   21466 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:23.396504   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:23.460372   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:23.460462   21466 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:23.877988   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:23.942668   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:23.942750   21466 retry.go:31] will retry after 829.823411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:24.774956   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:24.838607   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:38:24.838697   21466 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:38:24.838716   21466 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:24.838770   21466 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:38:24.838813   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:24.899562   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:24.899642   21466 retry.go:31] will retry after 273.70215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:25.174130   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:25.238550   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:25.238669   21466 retry.go:31] will retry after 209.670244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:25.450026   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:25.517668   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:25.517762   21466 retry.go:31] will retry after 670.513831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:26.188784   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:26.253839   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:38:26.253943   21466 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:38:26.253977   21466 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:26.253985   21466 start.go:128] duration metric: createHost completed in 5.647634594s
	I1025 21:38:26.254049   21466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:38:26.254094   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:26.314005   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:26.314089   21466 retry.go:31] will retry after 168.316559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:26.484719   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:26.547679   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:26.547766   21466 retry.go:31] will retry after 390.412446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:26.940589   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:27.005215   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:27.005296   21466 retry.go:31] will retry after 587.33751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:27.595015   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:27.659950   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:38:27.660042   21466 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:38:27.660080   21466 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:27.660125   21466 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:38:27.660168   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:27.721162   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:27.721278   21466 retry.go:31] will retry after 230.78805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:27.954394   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:28.019996   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:28.020102   21466 retry.go:31] will retry after 386.469643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:28.408980   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:28.473997   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:28.474099   21466 retry.go:31] will retry after 423.866531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:28.900346   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:28.965253   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	I1025 21:38:28.965337   21466 retry.go:31] will retry after 659.880839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:29.627371   21466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632
	W1025 21:38:29.691942   21466 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632 returned with exit code 1
	W1025 21:38:29.692050   21466 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:38:29.692079   21466 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-213632": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-213632: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	I1025 21:38:29.692090   21466 fix.go:57] fixHost completed within 29.780349119s
	I1025 21:38:29.692097   21466 start.go:83] releasing machines lock for "newest-cni-213632", held for 29.78044758s
	W1025 21:38:29.692273   21466 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-213632" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-213632" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I1025 21:38:29.737042   21466 out.go:177] 
	W1025 21:38:29.759126   21466 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-213632 container: docker run --rm --name newest-cni-213632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-213632 --entrypoint /usr/bin/test -v newest-cni-213632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W1025 21:38:29.759153   21466 out.go:239] * 
	* 
	W1025 21:38:29.760320   21466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:38:29.845764   21466 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p newest-cni-213632 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:
-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "64e41498df26e3abcb3c83f09646db22f2361871326590970891401f0ae10c43",
	        "Created": "2022-10-26T04:38:20.882736729Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (114.653147ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1025 21:38:30.064005   21711 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (61.84s)
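The SecondStart failure has a single root cause, visible in the stderr above: the volume-preparation sidecar (docker run --rm --entrypoint /usr/bin/test ...) exited with status 125 because Docker Desktop's containerd socket at /var/run/desktop-containerd/containerd.sock refused connections, so the node container was never created and every later docker container inspect reported "No such container". A minimal pre-flight sketch before re-running, assuming shell access to the same macOS worker (image reference and volume name taken from the log; the trailing echo is illustrative):

	# Fails fast when the Docker Desktop daemon is unreachable.
	docker info --format '{{.ServerVersion}}'
	# Replay the step that exited 125: the sidecar mounts the volume at /var
	# and merely checks that /var/lib exists inside it.
	docker volume create newest-cni-213632
	docker run --rm --entrypoint /usr/bin/test \
		-v newest-cni-213632:/var \
		gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 \
		-d /var/lib && echo "kicbase volume ok"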
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-213632 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p newest-cni-213632 "sudo crictl images -o json": exit status 80 (205.019617ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p newest-cni-213632 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json: unexpected end of JSON input. output:

start_stop_delete_test.go:304: v1.25.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.3",
- 	"registry.k8s.io/kube-controller-manager:v1.25.3",
- 	"registry.k8s.io/kube-proxy:v1.25.3",
- 	"registry.k8s.io/kube-scheduler:v1.25.3",
- 	"registry.k8s.io/pause:3.8",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:
-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "64e41498df26e3abcb3c83f09646db22f2361871326590970891401f0ae10c43",
	        "Created": "2022-10-26T04:38:20.882736729Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (113.002253ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1025 21:38:30.447688   21721 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
TestStartStop/group/newest-cni/serial/Pause (0.56s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-213632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p newest-cni-213632 --alsologtostderr -v=1: exit status 80 (202.751311ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 21:38:30.499252   21725 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:38:30.499425   21725 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:38:30.499431   21725 out.go:309] Setting ErrFile to fd 2...
	I1025 21:38:30.499435   21725 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:38:30.499562   21725 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 21:38:30.499858   21725 out.go:303] Setting JSON to false
	I1025 21:38:30.499874   21725 mustload.go:65] Loading cluster: newest-cni-213632
	I1025 21:38:30.500155   21725 config.go:180] Loaded profile config "newest-cni-213632": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 21:38:30.500515   21725 cli_runner.go:164] Run: docker container inspect newest-cni-213632 --format={{.State.Status}}
	W1025 21:38:30.561125   21725 cli_runner.go:211] docker container inspect newest-cni-213632 --format={{.State.Status}} returned with exit code 1
	I1025 21:38:30.583502   21725 out.go:177] 
	W1025 21:38:30.605366   21725 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
	
	W1025 21:38:30.605393   21725 out.go:239] * 
	* 
	W1025 21:38:30.608169   21725 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:38:30.629127   21725 out.go:177] 
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p newest-cni-213632 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:
-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "64e41498df26e3abcb3c83f09646db22f2361871326590970891401f0ae10c43",
	        "Created": "2022-10-26T04:38:20.882736729Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (112.650477ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1025 21:38:30.828333   21733 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-213632
helpers_test.go:235: (dbg) docker inspect newest-cni-213632:
-- stdout --
	[
	    {
	        "Name": "newest-cni-213632",
	        "Id": "64e41498df26e3abcb3c83f09646db22f2361871326590970891401f0ae10c43",
	        "Created": "2022-10-26T04:38:20.882736729Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-213632"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-213632 -n newest-cni-213632: exit status 7 (111.961279ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1025 21:38:31.004363   21739 status.go:249] status error: host: state: unknown state "newest-cni-213632": docker container inspect newest-cni-213632 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-213632
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-213632" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.56s)
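The two follow-on failures in this group (VerifyKubernetesImages and Pause) are secondary: each command bails out as soon as docker container inspect reports the node container missing, so the empty crictl output and the eight-image "-want +got" diff above say nothing about the v1.25.3 images themselves. Against a profile that actually started, the same checks can be replayed by hand; a sketch reusing the exact commands from the log:

	out/minikube-darwin-amd64 ssh -p newest-cni-213632 "sudo crictl images -o json"
	out/minikube-darwin-amd64 pause -p newest-cni-213632 --alsologtostderr -v=1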
Test pass (153/246)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.76
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.25.3/json-events 7.51
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.73
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.42
18 TestDownloadOnlyKic 14.6
19 TestBinaryMirror 1.66
20 TestOffline 50.57
22 TestAddons/Setup 128.34
26 TestAddons/parallel/MetricsServer 5.56
27 TestAddons/parallel/HelmTiller 11.18
29 TestAddons/parallel/CSI 41.81
30 TestAddons/parallel/Headlamp 10.28
32 TestAddons/serial/GCPAuth 16.17
33 TestAddons/StoppedEnableDisable 12.92
40 TestHyperKitDriverInstallOrUpdate 8.78
43 TestErrorSpam/setup 26.29
44 TestErrorSpam/start 2.14
45 TestErrorSpam/status 1.3
46 TestErrorSpam/pause 1.84
47 TestErrorSpam/unpause 1.93
48 TestErrorSpam/stop 13.1
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 43.09
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 51.54
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 0.05
59 TestFunctional/serial/CacheCmd/cache/add_remote 6.05
60 TestFunctional/serial/CacheCmd/cache/add_local 1.85
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.08
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.58
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.5
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.64
68 TestFunctional/serial/ExtraConfig 52.78
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 2.94
71 TestFunctional/serial/LogsFileCmd 3.01
73 TestFunctional/parallel/ConfigCmd 0.58
74 TestFunctional/parallel/DashboardCmd 13.2
75 TestFunctional/parallel/DryRun 1.3
76 TestFunctional/parallel/InternationalLanguage 0.64
77 TestFunctional/parallel/StatusCmd 1.3
80 TestFunctional/parallel/ServiceCmd 15.27
82 TestFunctional/parallel/AddonsCmd 0.64
83 TestFunctional/parallel/PersistentVolumeClaim 25.33
85 TestFunctional/parallel/SSHCmd 1.04
86 TestFunctional/parallel/CpCmd 1.75
87 TestFunctional/parallel/MySQL 21.31
88 TestFunctional/parallel/FileSync 0.42
89 TestFunctional/parallel/CertSync 2.52
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
97 TestFunctional/parallel/License 0.63
98 TestFunctional/parallel/Version/short 0.1
99 TestFunctional/parallel/Version/components 0.94
100 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
101 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
102 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
103 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
104 TestFunctional/parallel/ImageCommands/ImageBuild 3.24
105 TestFunctional/parallel/ImageCommands/Setup 2.57
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.01
107 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.41
108 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.68
109 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.11
110 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
111 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.53
112 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.51
113 TestFunctional/parallel/DockerEnv/bash 1.67
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.43
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.39
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.19
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/MountCmd/any-port 11.18
128 TestFunctional/parallel/MountCmd/specific-port 2.77
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
130 TestFunctional/parallel/ProfileCmd/profile_list 0.52
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
132 TestFunctional/delete_addon-resizer_images 0.16
133 TestFunctional/delete_my-image_image 0.07
134 TestFunctional/delete_minikube_cached_images 0.07
144 TestJSONOutput/start/Command 42.98
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.7
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.61
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 12.31
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.76
169 TestKicCustomNetwork/create_custom_network 29.78
170 TestKicCustomNetwork/use_default_bridge_network 28.42
171 TestKicExistingNetwork 29.6
172 TestKicCustomSubnet 30.7
173 TestMainNoArgs 0.07
174 TestMinikubeProfile 60.96
177 TestMountStart/serial/StartWithMountFirst 7.42
178 TestMountStart/serial/VerifyMountFirst 0.42
179 TestMountStart/serial/StartWithMountSecond 7.47
180 TestMountStart/serial/VerifyMountSecond 0.42
181 TestMountStart/serial/DeleteFirst 2.19
182 TestMountStart/serial/VerifyMountPostDelete 0.41
183 TestMountStart/serial/Stop 1.62
184 TestMountStart/serial/RestartStopped 5.09
185 TestMountStart/serial/VerifyMountPostStop 0.42
188 TestMultiNode/serial/FreshStart2Nodes 75.55
189 TestMultiNode/serial/DeployApp2Nodes 4.9
190 TestMultiNode/serial/PingHostFrom2Pods 0.88
191 TestMultiNode/serial/AddNode 36.08
192 TestMultiNode/serial/ProfileList 0.5
193 TestMultiNode/serial/CopyFile 15.25
194 TestMultiNode/serial/StopNode 14.01
195 TestMultiNode/serial/StartAfterStop 19.44
196 TestMultiNode/serial/RestartKeepsNodes 109.59
197 TestMultiNode/serial/DeleteNode 17.01
198 TestMultiNode/serial/StopMultiNode 24.95
200 TestMultiNode/serial/ValidateNameConflict 31.61
204 TestPreload 136.13
206 TestScheduledStopUnix 101.29
207 TestSkaffold 56.7
209 TestInsufficientStorage 13.6
225 TestStoppedBinaryUpgrade/Setup 1.44
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.35
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
243 TestNoKubernetes/serial/ProfileList 7.27
246 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.51
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.73
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.23
308 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.16.0/json-events (15.76s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-201722 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-201722 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (15.762568672s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.76s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-201722
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-201722: exit status 85 (291.575817ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-201722 | jenkins | v1.27.1 | 25 Oct 22 20:17 PDT |          |
	|         | -p download-only-201722        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 20:17:22
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 20:17:22.462441    2918 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:17:22.462599    2918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:17:22.462604    2918 out.go:309] Setting ErrFile to fd 2...
	I1025 20:17:22.462608    2918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:17:22.462725    2918 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	W1025 20:17:22.462827    2918 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/14956-2080/.minikube/config/config.json: open /Users/jenkins/minikube-integration/14956-2080/.minikube/config/config.json: no such file or directory
	I1025 20:17:22.463501    2918 out.go:303] Setting JSON to true
	I1025 20:17:22.478222    2918 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1011,"bootTime":1666753231,"procs":344,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:17:22.478353    2918 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:17:22.500700    2918 out.go:97] [download-only-201722] minikube v1.27.1 on Darwin 12.6
	I1025 20:17:22.500791    2918 notify.go:220] Checking for updates...
	W1025 20:17:22.500817    2918 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 20:17:22.521568    2918 out.go:169] MINIKUBE_LOCATION=14956
	I1025 20:17:22.563661    2918 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:17:22.589663    2918 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:17:22.611943    2918 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:17:22.633917    2918 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	W1025 20:17:22.676724    2918 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 20:17:22.677075    2918 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:17:22.743238    2918 docker.go:137] docker version: linux-20.10.17
	I1025 20:17:22.743353    2918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:17:22.874210    2918 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-10-26 03:17:22.808287041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:17:22.895999    2918 out.go:97] Using the docker driver based on user configuration
	I1025 20:17:22.896045    2918 start.go:282] selected driver: docker
	I1025 20:17:22.896096    2918 start.go:808] validating driver "docker" against <nil>
	I1025 20:17:22.896266    2918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:17:23.027531    2918 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-10-26 03:17:22.963656406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:17:23.027651    2918 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 20:17:23.031941    2918 start_flags.go:384] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I1025 20:17:23.032067    2918 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 20:17:23.053783    2918 out.go:169] Using Docker Desktop driver with root privileges
	I1025 20:17:23.075845    2918 cni.go:95] Creating CNI manager for ""
	I1025 20:17:23.075873    2918 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 20:17:23.075886    2918 start_flags.go:317] config:
	{Name:download-only-201722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-201722 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:17:23.101878    2918 out.go:97] Starting control plane node download-only-201722 in cluster download-only-201722
	I1025 20:17:23.101921    2918 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:17:23.123656    2918 out.go:97] Pulling base image ...
	I1025 20:17:23.123763    2918 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 20:17:23.123845    2918 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:17:23.178866    2918 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 20:17:23.178888    2918 cache.go:57] Caching tarball of preloaded images
	I1025 20:17:23.179087    2918 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 20:17:23.200911    2918 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 20:17:23.200946    2918 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:17:23.210563    2918 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 to local cache
	I1025 20:17:23.210743    2918 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local cache directory
	I1025 20:17:23.210863    2918 image.go:120] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 to local cache
	I1025 20:17:23.326017    2918 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 20:17:27.912295    2918 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:17:27.912429    2918 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:17:28.454236    2918 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 20:17:28.454461    2918 profile.go:148] Saving config to /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/download-only-201722/config.json ...
	I1025 20:17:28.454481    2918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/download-only-201722/config.json: {Name:mkcdee5f38eedb00a8aada5a2e625fc0afe25bd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 20:17:28.454771    2918 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 20:17:28.455108    2918 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-201722"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
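
The exit status 85 above is expected rather than a bug: this profile was created with --download-only, so no control plane node exists and "minikube logs" has nothing to collect (the hint at the end of the stdout block says as much). A quick hand check of the same behavior, a sketch using the binary and profile name from this run:

    out/minikube-darwin-amd64 logs -p download-only-201722
    echo "exit: $?"   # 85 for a profile whose control plane node was never created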

TestDownloadOnly/v1.25.3/json-events (7.51s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-201722 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-201722 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker : (7.51003861s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (7.51s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-201722
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-201722: exit status 85 (288.902414ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-201722 | jenkins | v1.27.1 | 25 Oct 22 20:17 PDT |          |
	|         | -p download-only-201722        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-201722 | jenkins | v1.27.1 | 25 Oct 22 20:17 PDT |          |
	|         | -p download-only-201722        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 20:17:38
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 20:17:38.519858    2956 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:17:38.520022    2956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:17:38.520028    2956 out.go:309] Setting ErrFile to fd 2...
	I1025 20:17:38.520032    2956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:17:38.520150    2956 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	W1025 20:17:38.520235    2956 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/14956-2080/.minikube/config/config.json: open /Users/jenkins/minikube-integration/14956-2080/.minikube/config/config.json: no such file or directory
	I1025 20:17:38.520556    2956 out.go:303] Setting JSON to true
	I1025 20:17:38.535504    2956 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1027,"bootTime":1666753231,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:17:38.535616    2956 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:17:38.557722    2956 out.go:97] [download-only-201722] minikube v1.27.1 on Darwin 12.6
	I1025 20:17:38.557898    2956 notify.go:220] Checking for updates...
	I1025 20:17:38.579803    2956 out.go:169] MINIKUBE_LOCATION=14956
	I1025 20:17:38.601873    2956 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:17:38.624101    2956 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:17:38.645939    2956 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:17:38.667578    2956 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	W1025 20:17:38.710847    2956 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 20:17:38.711478    2956 config.go:180] Loaded profile config "download-only-201722": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1025 20:17:38.711556    2956 start.go:716] api.Load failed for download-only-201722: filestore "download-only-201722": Docker machine "download-only-201722" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 20:17:38.711649    2956 driver.go:365] Setting default libvirt URI to qemu:///system
	W1025 20:17:38.711683    2956 start.go:716] api.Load failed for download-only-201722: filestore "download-only-201722": Docker machine "download-only-201722" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 20:17:38.780521    2956 docker.go:137] docker version: linux-20.10.17
	I1025 20:17:38.780631    2956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:17:38.909779    2956 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-10-26 03:17:38.857767903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:17:38.931743    2956 out.go:97] Using the docker driver based on existing profile
	I1025 20:17:38.931781    2956 start.go:282] selected driver: docker
	I1025 20:17:38.931833    2956 start.go:808] validating driver "docker" against &{Name:download-only-201722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-201722 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:17:38.932143    2956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:17:39.062747    2956 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-10-26 03:17:39.010585698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:17:39.064834    2956 cni.go:95] Creating CNI manager for ""
	I1025 20:17:39.064853    2956 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 20:17:39.064864    2956 start_flags.go:317] config:
	{Name:download-only-201722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-201722 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:17:39.086715    2956 out.go:97] Starting control plane node download-only-201722 in cluster download-only-201722
	I1025 20:17:39.086805    2956 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 20:17:39.108677    2956 out.go:97] Pulling base image ...
	I1025 20:17:39.108791    2956 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:17:39.108875    2956 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 20:17:39.167577    2956 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 20:17:39.167596    2956 cache.go:57] Caching tarball of preloaded images
	I1025 20:17:39.167784    2956 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 20:17:39.189596    2956 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1025 20:17:39.189665    2956 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 20:17:39.192959    2956 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 to local cache
	I1025 20:17:39.193063    2956 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local cache directory
	I1025 20:17:39.193082    2956 image.go:63] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local cache directory, skipping pull
	I1025 20:17:39.193090    2956 image.go:104] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in cache, skipping pull
	I1025 20:17:39.193102    2956 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 as a tarball
	I1025 20:17:39.294868    2956 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-201722"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.29s)
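
The preload URLs above carry their expected digest as a checksum=md5:... query parameter, and the client verifies the tarball after saving it (the "getting checksum" / "verifying checksum" lines). A hedged sketch of re-checking the cached file by hand with the stock macOS md5 tool; the path and digest are copied from the log above:

    md5 -q /Users/jenkins/minikube-integration/14956-2080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
    # expected: 624cb874287e7e3d793b79e4205a7f98 (from the checksum= parameter)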

TestDownloadOnly/DeleteAll (0.73s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.73s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.42s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-201722
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.42s)

TestDownloadOnlyKic (14.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-201747 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-201747 --force --alsologtostderr --driver=docker : (13.495680816s)
helpers_test.go:175: Cleaning up "download-docker-201747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-201747
--- PASS: TestDownloadOnlyKic (14.60s)

TestBinaryMirror (1.66s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-201802 --alsologtostderr --binary-mirror http://127.0.0.1:49397 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-201802 --alsologtostderr --binary-mirror http://127.0.0.1:49397 --driver=docker : (1.015081668s)
helpers_test.go:175: Cleaning up "binary-mirror-201802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-201802
--- PASS: TestBinaryMirror (1.66s)
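
--binary-mirror points minikube's Kubernetes binary downloads (kubectl and friends) at a local endpoint instead of storage.googleapis.com. A hedged sketch of standing up such a mirror for a run like this one, assuming the mirror only needs to serve release-style paths (e.g. v1.25.3/bin/darwin/amd64/kubectl) as static files; the ./mirror directory name is hypothetical:

    # Serve ./mirror on the port the test passes via --binary-mirror.
    python3 -m http.server 49397 --directory ./mirror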

TestOffline (50.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-205230 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-205230 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (47.341020743s)
helpers_test.go:175: Cleaning up "offline-docker-205230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-205230
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-205230: (3.229870063s)
--- PASS: TestOffline (50.57s)

TestAddons/Setup (128.34s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-201804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-201804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m8.339147872s)
--- PASS: TestAddons/Setup (128.34s)
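
With eight addons enabled in a single start invocation, a useful follow-up is to confirm their status afterwards. A sketch using the same binary and profile name as the test above:

    # Lists every addon with its enabled/disabled status for this profile.
    out/minikube-darwin-amd64 -p addons-201804 addons list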

TestAddons/parallel/MetricsServer (5.56s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 6.11519ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-2zzgn" [2b4caef1-4f97-4910-be23-a2127a7c2ee3] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008383829s
addons_test.go:367: (dbg) Run:  kubectl --context addons-201804 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-201804 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.56s)

TestAddons/parallel/HelmTiller (11.18s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.334438ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-qc9vn" [60b0a9e4-47bc-4bad-ad1a-08e3fdc3aae1] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012478688s
addons_test.go:425: (dbg) Run:  kubectl --context addons-201804 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-201804 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.705756554s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-201804 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.18s)

TestAddons/parallel/CSI (41.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 4.764832ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-201804 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-201804 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-201804 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [63efe2c7-a0d4-4a09-be24-a6f641672638] Pending
helpers_test.go:342: "task-pv-pod" [63efe2c7-a0d4-4a09-be24-a6f641672638] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [63efe2c7-a0d4-4a09-be24-a6f641672638] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.010355278s
addons_test.go:536: (dbg) Run:  kubectl --context addons-201804 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-201804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-201804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-201804 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-201804 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-201804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-201804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-201804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [278ac597-d2e3-4b83-8c93-def218e869ed] Pending
helpers_test.go:342: "task-pv-pod-restore" [278ac597-d2e3-4b83-8c93-def218e869ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [278ac597-d2e3-4b83-8c93-def218e869ed] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.010214382s
addons_test.go:578: (dbg) Run:  kubectl --context addons-201804 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-201804 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-201804 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-201804 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-201804 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.830367194s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-201804 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.81s)
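
The snapshot-and-restore flow this test exercises can be replayed by hand against any profile with the csi-hostpath-driver and volumesnapshots addons enabled. A minimal sketch using only the manifests referenced above (the kubectl context is assumed to point at the profile; the comments describe the conventional CSI restore pattern, not the exact file contents):

	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot of the bound PVC
	kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}   # poll until "true"
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new PVC restored from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod mounting the restored PVC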

TestAddons/parallel/Headlamp (10.28s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-201804 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-201804 --alsologtostderr -v=1: (1.270395162s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-g9v5k" [93293944-2e73-4a44-804a-d24e67f2f1e6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-5f4cf474d8-g9v5k" [93293944-2e73-4a44-804a-d24e67f2f1e6] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.008418051s
--- PASS: TestAddons/parallel/Headlamp (10.28s)
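
The readiness check above amounts to waiting for the addon's labelled pod; a minimal manual equivalent (<profile> is a placeholder):

	out/minikube-darwin-amd64 addons enable headlamp -p <profile>
	kubectl get pods -n headlamp -l app.kubernetes.io/name=headlamp   # expect 1/1 Running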

TestAddons/serial/GCPAuth (16.17s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-201804 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-201804 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0b75f8c7-d64f-4800-adbd-772bb90bd208] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [0b75f8c7-d64f-4800-adbd-772bb90bd208] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.007794393s
addons_test.go:624: (dbg) Run:  kubectl --context addons-201804 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-201804 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-201804 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-201804 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-201804 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-201804 addons disable gcp-auth --alsologtostderr -v=1: (6.620575939s)
--- PASS: TestAddons/serial/GCPAuth (16.17s)
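
The same assertions can be reproduced in any pod created after the gcp-auth addon is enabled, since the addon injects the credentials into new pods at admission time (a sketch, assuming a pod named busybox as in the test):

	kubectl exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS   # path of the mounted key file
	kubectl exec busybox -- cat /google-app-creds.json                # the injected credentials themselves
	kubectl exec busybox -- printenv GOOGLE_CLOUD_PROJECT             # project id picked up from the host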

TestAddons/StoppedEnableDisable (12.92s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-201804
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-201804: (12.469090272s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-201804
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-201804
--- PASS: TestAddons/StoppedEnableDisable (12.92s)

TestHyperKitDriverInstallOrUpdate (8.78s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.78s)

TestErrorSpam/setup (26.29s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-202136 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-202136 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 --driver=docker : (26.292715296s)
--- PASS: TestErrorSpam/setup (26.29s)

TestErrorSpam/start (2.14s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 start --dry-run
--- PASS: TestErrorSpam/start (2.14s)

TestErrorSpam/status (1.3s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 status
--- PASS: TestErrorSpam/status (1.30s)

TestErrorSpam/pause (1.84s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (1.93s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (13.1s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 stop: (12.42554063s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-202136 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-202136 stop
--- PASS: TestErrorSpam/stop (13.10s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/14956-2080/.minikube/files/etc/test/nested/copy/2916/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.09s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-202225 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-202225 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (43.092801162s)
--- PASS: TestFunctional/serial/StartWithProxy (43.09s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (51.54s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-202225 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-202225 --alsologtostderr -v=8: (51.537899703s)
functional_test.go:656: soft start took 51.538526381s for "functional-202225" cluster.
--- PASS: TestFunctional/serial/SoftStart (51.54s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-202225 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 cache add k8s.gcr.io/pause:3.1: (2.106084566s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 cache add k8s.gcr.io/pause:3.3: (2.030211851s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 cache add k8s.gcr.io/pause:latest: (1.904395515s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.85s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-202225 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2657228831/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cache add minikube-local-cache-test:functional-202225
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 cache add minikube-local-cache-test:functional-202225: (1.316621293s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cache delete minikube-local-cache-test:functional-202225
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-202225
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (412.293905ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 cache reload: (1.289193554s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.58s)
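
The sequence above is the whole round trip: delete the image inside the node, confirm crictl no longer sees it, then let `cache reload` push everything in the host-side cache back in. Condensed (<profile> is a placeholder):

	out/minikube-darwin-amd64 -p <profile> ssh sudo docker rmi k8s.gcr.io/pause:latest
	out/minikube-darwin-amd64 -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits 1: image gone
	out/minikube-darwin-amd64 -p <profile> cache reload
	out/minikube-darwin-amd64 -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits 0 again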

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 kubectl -- --context functional-202225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-202225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

TestFunctional/serial/ExtraConfig (52.78s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-202225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-202225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.778180564s)
functional_test.go:754: restart took 52.781141487s for "functional-202225" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.78s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-202225 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.94s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 logs: (2.937370004s)
--- PASS: TestFunctional/serial/LogsCmd (2.94s)

TestFunctional/serial/LogsFileCmd (3.01s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd1434915039/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd1434915039/001/logs.txt: (3.00258223s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.01s)

TestFunctional/parallel/ConfigCmd (0.58s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 config get cpus: exit status 14 (51.374076ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 config get cpus: exit status 14 (96.190047ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
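
Both non-zero exits above are expected: `config get` returns exit status 14 when the key is unset. The full set/get/unset cycle the test drives, as a sketch (<profile> is a placeholder):

	out/minikube-darwin-amd64 -p <profile> config set cpus 2
	out/minikube-darwin-amd64 -p <profile> config get cpus     # prints 2, exit 0
	out/minikube-darwin-amd64 -p <profile> config unset cpus
	out/minikube-darwin-amd64 -p <profile> config get cpus     # "specified key could not be found in config", exit 14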

TestFunctional/parallel/DashboardCmd (13.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-202225 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-202225 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 5243: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.20s)

TestFunctional/parallel/DryRun (1.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-202225 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-202225 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (617.338294ms)
-- stdout --
	* [functional-202225] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1025 20:26:15.004426    5182 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:26:15.004600    5182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:26:15.004605    5182 out.go:309] Setting ErrFile to fd 2...
	I1025 20:26:15.004609    5182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:26:15.004724    5182 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:26:15.005125    5182 out.go:303] Setting JSON to false
	I1025 20:26:15.019767    5182 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1544,"bootTime":1666753231,"procs":347,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:26:15.019855    5182 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:26:15.041581    5182 out.go:177] * [functional-202225] minikube v1.27.1 on Darwin 12.6
	I1025 20:26:15.069594    5182 notify.go:220] Checking for updates...
	I1025 20:26:15.091184    5182 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:26:15.113357    5182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:26:15.135170    5182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:26:15.156265    5182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:26:15.177627    5182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:26:15.200038    5182 config.go:180] Loaded profile config "functional-202225": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:26:15.200665    5182 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:26:15.269081    5182 docker.go:137] docker version: linux-20.10.17
	I1025 20:26:15.269202    5182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:26:15.397048    5182 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2022-10-26 03:26:15.329671747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:26:15.439526    5182 out.go:177] * Using the docker driver based on existing profile
	I1025 20:26:15.460400    5182 start.go:282] selected driver: docker
	I1025 20:26:15.460411    5182 start.go:808] validating driver "docker" against &{Name:functional-202225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-202225 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:26:15.460533    5182 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:26:15.483313    5182 out.go:177] 
	W1025 20:26:15.504750    5182 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 20:26:15.527545    5182 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-202225 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.30s)
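
Exit status 23 here is the RSRC_INSUFFICIENT_REQ_MEMORY guard: --dry-run still runs driver and resource validation, and 250MB is below the 1800MB usable minimum the log reports. A sketch of the two cases (<profile> is a placeholder):

	out/minikube-darwin-amd64 start -p <profile> --dry-run --memory 250MB --driver=docker   # exit 23, no cluster touched
	out/minikube-darwin-amd64 start -p <profile> --dry-run --driver=docker                  # default memory, validation passes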

TestFunctional/parallel/InternationalLanguage (0.64s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-202225 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-202225 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (639.974808ms)
-- stdout --
	* [functional-202225] minikube v1.27.1 sur Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1025 20:26:13.054699    5139 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:26:13.054833    5139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:26:13.054838    5139 out.go:309] Setting ErrFile to fd 2...
	I1025 20:26:13.054841    5139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:26:13.054955    5139 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:26:13.055341    5139 out.go:303] Setting JSON to false
	I1025 20:26:13.070702    5139 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1542,"bootTime":1666753231,"procs":347,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 20:26:13.070775    5139 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 20:26:13.092768    5139 out.go:177] * [functional-202225] minikube v1.27.1 sur Darwin 12.6
	I1025 20:26:13.114835    5139 notify.go:220] Checking for updates...
	I1025 20:26:13.136566    5139 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 20:26:13.157644    5139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	I1025 20:26:13.179428    5139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 20:26:13.222937    5139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 20:26:13.266873    5139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	I1025 20:26:13.295600    5139 config.go:180] Loaded profile config "functional-202225": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:26:13.296204    5139 driver.go:365] Setting default libvirt URI to qemu:///system
	I1025 20:26:13.363469    5139 docker.go:137] docker version: linux-20.10.17
	I1025 20:26:13.363608    5139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 20:26:13.492995    5139 info.go:266] docker info: {ID:PBOW:BTQ2:BYXZ:BOUN:R6IY:DRGO:NEBM:NTZZ:HDOO:ZQJB:IJ64:4YNX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2022-10-26 03:26:13.423066013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232588288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 20:26:13.515623    5139 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1025 20:26:13.537595    5139 start.go:282] selected driver: docker
	I1025 20:26:13.537623    5139 start.go:808] validating driver "docker" against &{Name:functional-202225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-202225 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 20:26:13.537803    5139 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 20:26:13.562655    5139 out.go:177] 
	W1025 20:26:13.584758    5139 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 20:26:13.606437    5139 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)

TestFunctional/parallel/StatusCmd (1.3s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)

TestFunctional/parallel/ServiceCmd (15.27s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-202225 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-202225 expose deployment hello-node --type=NodePort --port=8080
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-hs99r" [54b6cc81-8767-4809-a695-478734f20127] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-hs99r" [54b6cc81-8767-4809-a695-478734f20127] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.012583854s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 service list
functional_test.go:1449: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 service list: (1.055021557s)
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 service --namespace=default --https --url hello-node: (2.026339902s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50366
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 service hello-node --url --format={{.IP}}: (2.029683437s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 service hello-node --url
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 service hello-node --url: (2.028442757s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50401
--- PASS: TestFunctional/parallel/ServiceCmd (15.27s)
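
The endpoints printed above come from the standard NodePort flow; a minimal manual equivalent (<profile> is a placeholder, and on the docker driver the printed URL is a localhost tunnel, hence 127.0.0.1):

	kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl expose deployment hello-node --type=NodePort --port=8080
	out/minikube-darwin-amd64 -p <profile> service hello-node --url   # prints e.g. http://127.0.0.1:<port>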

TestFunctional/parallel/AddonsCmd (0.64s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 addons list
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.64s)

TestFunctional/parallel/PersistentVolumeClaim (25.33s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [673159cc-58f9-4cc6-be55-a37878098eb6] Running
E1025 20:25:13.088250    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:13.728498    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01143896s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-202225 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-202225 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-202225 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-202225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [d20788a0-b78f-4529-970c-adf044e6f405] Pending
helpers_test.go:342: "sp-pod" [d20788a0-b78f-4529-970c-adf044e6f405] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [d20788a0-b78f-4529-970c-adf044e6f405] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.052539663s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-202225 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-202225 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-202225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [43972509-3657-4b78-a03d-658337a07bc2] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [43972509-3657-4b78-a03d-658337a07bc2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [43972509-3657-4b78-a03d-658337a07bc2] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011212356s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-202225 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.33s)
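
The exec/delete/apply/exec sequence above is the actual persistence check: a file written through the claim must survive pod recreation. Condensed (context flag omitted):

	kubectl exec sp-pod -- touch /tmp/mount/foo
	kubectl delete -f testdata/storage-provisioner/pod.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml    # new pod, same PVC
	kubectl exec sp-pod -- ls /tmp/mount                      # foo is still there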

TestFunctional/parallel/SSHCmd (1.04s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.04s)

TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh -n functional-202225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 cp functional-202225:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2475301633/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh -n functional-202225 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)
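The cp check is a host-to-guest-to-host round trip; a minimal sketch (the final local destination here is illustrative, the test uses a generated temp dir):

	out/minikube-darwin-amd64 -p functional-202225 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p functional-202225 ssh -n functional-202225 "sudo cat /home/docker/cp-test.txt"
	out/minikube-darwin-amd64 -p functional-202225 cp functional-202225:/home/docker/cp-test.txt /tmp/cp-test.txt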

TestFunctional/parallel/MySQL (21.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-202225 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-z8dtv" [86b17613-fad9-4157-bd3b-69fd8c34e91c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-z8dtv" [86b17613-fad9-4157-bd3b-69fd8c34e91c] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.008451932s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;": exit status 1 (125.012943ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;": exit status 1 (122.557633ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;": exit status 1 (109.195171ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.31s)
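The ERROR 1045/2002 retries above are expected noise: the pod reports Running as soon as the container starts, but mysqld spends a few more seconds initializing before it accepts connections, so the test simply re-runs the query until it succeeds. A hand-rolled equivalent of that retry (a sketch only; the harness uses its own backoff):

	until kubectl --context functional-202225 exec mysql-596b7fcdbf-z8dtv -- mysql -ppassword -e "show databases;"; do
		sleep 2    # keep retrying while mysqld is still initializing
	done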

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/2916/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /etc/test/nested/copy/2916/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.52s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/2916.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /etc/ssl/certs/2916.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/2916.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /usr/share/ca-certificates/2916.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/29162.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /etc/ssl/certs/29162.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/29162.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /usr/share/ca-certificates/29162.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.52s)
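The hashed filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: minikube installs each synced certificate under its original name and again under its subject hash so libssl can look it up. Assuming openssl is available in the guest image, the pairing could be confirmed with something like:

	out/minikube-darwin-amd64 -p functional-202225 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/2916.pem"
	# expected to print 51391683, i.e. the basename of /etc/ssl/certs/51391683.0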

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-202225 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 ssh "sudo systemctl is-active crio": exit status 1 (585.55286ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
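The non-zero exit is the point of this test: systemctl is-active exits 0 for an active unit and 3 for an inactive one, and that code propagates through ssh (hence "Process exited with status 3") while stdout reads "inactive". With docker as the selected runtime, crio must report inactive; the manual equivalent (sketch):

	out/minikube-darwin-amd64 -p functional-202225 ssh "sudo systemctl is-active crio"    # prints "inactive", exits 3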

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license

=== CONT  TestFunctional/parallel/License
functional_test.go:2218: (dbg) Run:  ls ./licenses
functional_test.go:2226: (dbg) Run:  cat ./licenses/cloud.google.com/go/compute/metadata/LICENSE
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.94s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.94s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-202225 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-202225
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-202225
docker.io/kubernetesui/metrics-scraper:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls --format table
2022/10/25 20:26:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-202225 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-202225 | ae3acbe14053f | 30B    |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-202225 | bf7327bc99a89 | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-202225 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| docker.io/library/mysql                     | 5.7               | 14905234a4ed4 | 495MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-202225 image ls --format json:
[{"id":"ae3acbe14053f92a156839c13a1588cc3d872f645f91491b99a63961906762ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-202225"],"size":"30"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23600000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-202225"],"size":"32900000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"bf7327bc99a8926b30a465b3ed618c7183e067aaeaf6799c7567adfe7a63a46c","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-202225"],"size":"1240000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
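The JSON listing carries the same data as the table above and is the easiest form to post-process; for example, assuming jq is installed on the host:

	out/minikube-darwin-amd64 -p functional-202225 image ls --format json | jq -r '.[] | "\(.repoTags[0]) \(.size)"'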

TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-202225 image ls --format yaml:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: ae3acbe14053f92a156839c13a1588cc3d872f645f91491b99a63961906762ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-202225
size: "30"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-202225
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 ssh pgrep buildkitd: exit status 1 (413.710699ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image build -t localhost/my-image:functional-202225 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image build -t localhost/my-image:functional-202225 testdata/build: (2.512617971s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-202225 image build -t localhost/my-image:functional-202225 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in aad27f522a06
Removing intermediate container aad27f522a06
---> 15fca703c831
Step 3/3 : ADD content.txt /
---> bf7327bc99a8
Successfully built bf7327bc99a8
Successfully tagged localhost/my-image:functional-202225
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)
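The failed pgrep is how the test probes whether buildkitd is running; here it is not, so the classic docker builder output follows. The three build steps imply a build context of roughly this shape, reconstructed from the log rather than copied from the repo (paths and file contents are illustrative):

	mkdir -p build-demo && cd build-demo
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	echo test > content.txt
	out/minikube-darwin-amd64 -p functional-202225 image build -t localhost/my-image:functional-202225 .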

TestFunctional/parallel/ImageCommands/Setup (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
E1025 20:25:12.448970    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:12.455245    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:12.465379    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:12.485537    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:12.525651    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:12.605821    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:12.766028    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.506621663s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-202225
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.57s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image load --daemon gcr.io/google-containers/addon-resizer:functional-202225
E1025 20:25:15.008712    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:25:17.570725    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image load --daemon gcr.io/google-containers/addon-resizer:functional-202225: (2.698750799s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image load --daemon gcr.io/google-containers/addon-resizer:functional-202225

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image load --daemon gcr.io/google-containers/addon-resizer:functional-202225: (2.092939039s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.41s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E1025 20:25:22.691219    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.368691817s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-202225
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image load --daemon gcr.io/google-containers/addon-resizer:functional-202225

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image load --daemon gcr.io/google-containers/addon-resizer:functional-202225: (2.929145092s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image save gcr.io/google-containers/addon-resizer:functional-202225 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image save gcr.io/google-containers/addon-resizer:functional-202225 /Users/jenkins/workspace/addon-resizer-save.tar: (1.108417877s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image rm gcr.io/google-containers/addon-resizer:functional-202225
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.220706428s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-202225
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 image save --daemon gcr.io/google-containers/addon-resizer:functional-202225

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-202225 image save --daemon gcr.io/google-containers/addon-resizer:functional-202225: (2.382429627s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-202225
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.51s)
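Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise the full round trip between the cluster runtime, a tarball on the host, and the host docker daemon; condensed from the commands above:

	out/minikube-darwin-amd64 -p functional-202225 image save gcr.io/google-containers/addon-resizer:functional-202225 /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-202225 image rm gcr.io/google-containers/addon-resizer:functional-202225
	out/minikube-darwin-amd64 -p functional-202225 image load /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-202225 image save --daemon gcr.io/google-containers/addon-resizer:functional-202225
	docker image inspect gcr.io/google-containers/addon-resizer:functional-202225    # image is back in the host daemon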

TestFunctional/parallel/DockerEnv/bash (1.67s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-202225 docker-env) && out/minikube-darwin-amd64 status -p functional-202225"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-202225 docker-env) && out/minikube-darwin-amd64 status -p functional-202225": (1.031868345s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-202225 docker-env) && docker images"
E1025 20:25:32.932292    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.67s)
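docker-env prints shell exports (DOCKER_HOST and friends) that point the host docker CLI at the daemon inside the minikube node, which is why the test wraps each command in an eval subshell:

	eval $(out/minikube-darwin-amd64 -p functional-202225 docker-env)
	docker images    # now lists the images inside the functional-202225 node, not the host's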

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.43s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-202225 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-202225 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [e862790c-1b7f-4075-a5ab-45d272b615a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [e862790c-1b7f-4075-a5ab-45d272b615a3] Running
E1025 20:25:53.412829    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.01123375s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-202225 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
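On the docker driver on macOS, minikube tunnel is what gives the LoadBalancer service an ingress IP reachable from the host via 127.0.0.1; the serial flow above, condensed (the curl is added here for illustration):

	out/minikube-darwin-amd64 -p functional-202225 tunnel --alsologtostderr &
	kubectl --context functional-202225 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	curl -sI http://127.0.0.1/    # nginx-svc answers through the tunnel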

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-202225 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 4900: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (11.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-202225 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port469845209/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1666754757472477000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port469845209/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1666754757472477000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port469845209/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1666754757472477000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port469845209/001/test-1666754757472477000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (430.822454ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 03:25 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 03:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 03:25 test-1666754757472477000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh cat /mount-9p/test-1666754757472477000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-202225 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [29fdf890-d79c-49e3-bf32-163b988736f8] Pending
helpers_test.go:342: "busybox-mount" [29fdf890-d79c-49e3-bf32-163b988736f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [29fdf890-d79c-49e3-bf32-163b988736f8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [29fdf890-d79c-49e3-bf32-163b988736f8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.006027683s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-202225 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-202225 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port469845209/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.18s)
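The any-port variant shares a host directory into the node over 9p, verifies the mount, then has a busybox pod read and write through it; condensed (the host path here is illustrative, the test uses a generated temp dir):

	out/minikube-darwin-amd64 mount -p functional-202225 /tmp/mount-demo:/mount-9p &
	out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-darwin-amd64 -p functional-202225 ssh -- ls -la /mount-9p
	out/minikube-darwin-amd64 -p functional-202225 ssh "sudo umount -f /mount-9p"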

TestFunctional/parallel/MountCmd/specific-port (2.77s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-202225 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2551858656/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (522.416306ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-202225 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2551858656/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-202225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-202225 ssh "sudo umount -f /mount-9p": exit status 1 (401.23516ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-202225 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-202225 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2551858656/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.77s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "447.424211ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "76.365729ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "437.228884ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "78.63923ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-202225
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-202225
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-202225
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (42.98s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-203351 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-203351 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (42.975490371s)
--- PASS: TestJSONOutput/start/Command (42.98s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-203351 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-203351 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.31s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-203351 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-203351 --output=json --user=testUser: (12.306293601s)
--- PASS: TestJSONOutput/stop/Command (12.31s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-203450 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-203450 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (327.88445ms)

-- stdout --
	{"specversion":"1.0","id":"4896a3c2-6a57-4f7a-9c1a-9bf1aa263741","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-203450] minikube v1.27.1 on Darwin 12.6","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e65f46e4-ea16-466b-b2c9-57fce4b5652c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14956"}}
	{"specversion":"1.0","id":"0da249df-fb1a-420b-9eef-6f0e8aa351a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig"}}
	{"specversion":"1.0","id":"09136281-b595-4193-8407-6bb8958cbe43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"cf7961be-2fc5-4c06-8a1d-1c0aea0e7103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7451c690-8158-4f93-82fb-63cc323e3246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube"}}
	{"specversion":"1.0","id":"28e3f9b0-c16f-4ac8-878d-714746182499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-203450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-203450
--- PASS: TestErrorJSONOutput (0.76s)
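
The stdout above shows the shape minikube's --output=json events take: CloudEvents-style objects with specversion, id, source, type, datacontenttype, and a data payload, with failures reported as io.k8s.sigs.minikube.error events that carry an exitcode. A minimal decoder sketch for such a stream; the struct below is inferred from the log lines above rather than taken from minikube's source:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: minikube start --output=json ...
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // not an event line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "bad event:", err)
			continue
		}
		// Error events, like DRV_UNSUPPORTED_OS above, carry an exit code.
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Println("error event:", ev.Data["name"], "exitcode", ev.Data["exitcode"])
		}
	}
}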

TestKicCustomNetwork/create_custom_network (29.78s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-203450 --network=
E1025 20:35:12.455774    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:35:12.991122    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-203450 --network=: (27.072946036s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-203450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-203450
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-203450: (2.642390957s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.78s)

TestKicCustomNetwork/use_default_bridge_network (28.42s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-203520 --network=bridge
E1025 20:35:40.688354    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-203520 --network=bridge: (25.806933476s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-203520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-203520
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-203520: (2.546301227s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.42s)

TestKicExistingNetwork (29.6s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-203549 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-203549 --network=existing-network: (26.709653873s)
helpers_test.go:175: Cleaning up "existing-network-203549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-203549
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-203549: (2.493940935s)
--- PASS: TestKicExistingNetwork (29.60s)

TestKicCustomSubnet (30.7s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-203618 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-203618 --subnet=192.168.60.0/24: (27.945737371s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-203618 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-203618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-203618
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-203618: (2.687670924s)
--- PASS: TestKicCustomSubnet (30.70s)
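
The `docker network inspect` call above is the read-back half of the subnet check: the test asks Docker for the IPAM subnet of the network it just created and compares it with the requested 192.168.60.0/24. The same verification as a standalone sketch, reusing the exact format string from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const network, want = "custom-subnet-203618", "192.168.60.0/24"
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
	} else {
		fmt.Println("subnet ok:", want)
	}
}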

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (60.96s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-203649 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-203649 --driver=docker : (26.924748584s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-203649 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-203649 --driver=docker : (26.694393019s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-203649
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-203649
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-203649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-203649
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-203649: (2.667329157s)
helpers_test.go:175: Cleaning up "first-203649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-203649
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-203649: (2.712512877s)
--- PASS: TestMinikubeProfile (60.96s)

TestMountStart/serial/StartWithMountFirst (7.42s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-203750 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-203750 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.417503756s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.42s)

TestMountStart/serial/VerifyMountFirst (0.42s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-203750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

TestMountStart/serial/StartWithMountSecond (7.47s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-203750 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-203750 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.470194967s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.47s)

TestMountStart/serial/VerifyMountSecond (0.42s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-203750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (2.19s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-203750 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-203750 --alsologtostderr -v=5: (2.192876251s)
--- PASS: TestMountStart/serial/DeleteFirst (2.19s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-203750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.62s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-203750
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-203750: (1.615602129s)
--- PASS: TestMountStart/serial/Stop (1.62s)

TestMountStart/serial/RestartStopped (5.09s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-203750
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-203750: (4.093416117s)
--- PASS: TestMountStart/serial/RestartStopped (5.09s)

TestMountStart/serial/VerifyMountPostStop (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-203750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (75.55s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-203818 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-203818 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m14.810127943s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.55s)

TestMultiNode/serial/DeployApp2Nodes (4.9s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-203818 -- rollout status deployment/busybox: (3.212806273s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-h6pzg -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-x5dqw -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-h6pzg -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-x5dqw -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-h6pzg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-x5dqw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.90s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-h6pzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-h6pzg -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-x5dqw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-203818 -- exec busybox-65db55d5d6-x5dqw -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
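
The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` above pulls the host's IP out of BusyBox nslookup output: line 5 is the Address line for the queried name and its third space-delimited field is the IP, which the follow-up `ping -c 1 192.168.65.2` then exercises. A sketch of the same extraction in Go; the sample output embedded below is an assumed typical BusyBox format, not taken from this log:

package main

import (
	"fmt"
	"strings"
)

// sample mimics typical BusyBox nslookup output; the exact layout is an
// assumption, not shown in this report.
const sample = `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.2 host.minikube.internal`

func main() {
	lines := strings.Split(sample, "\n")
	if len(lines) < 5 {
		panic("unexpected nslookup output")
	}
	// awk 'NR==5' -> take line 5; cut -d' ' -f3 -> third space-delimited field.
	fields := strings.Split(lines[4], " ")
	if len(fields) >= 3 {
		fmt.Println("host IP:", fields[2]) // 192.168.65.2, the address pinged above
	}
}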

TestMultiNode/serial/AddNode (36.08s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-203818 -v 3 --alsologtostderr
E1025 20:40:12.437050    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:40:12.973317    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-203818 -v 3 --alsologtostderr: (35.015299359s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr: (1.066186469s)
--- PASS: TestMultiNode/serial/AddNode (36.08s)

TestMultiNode/serial/ProfileList (0.5s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.50s)

TestMultiNode/serial/CopyFile (15.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 status --output json --alsologtostderr: (1.020815211s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp testdata/cp-test.txt multinode-203818:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3396189668/001/cp-test_multinode-203818.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818:/home/docker/cp-test.txt multinode-203818-m02:/home/docker/cp-test_multinode-203818_multinode-203818-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m02 "sudo cat /home/docker/cp-test_multinode-203818_multinode-203818-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818:/home/docker/cp-test.txt multinode-203818-m03:/home/docker/cp-test_multinode-203818_multinode-203818-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m03 "sudo cat /home/docker/cp-test_multinode-203818_multinode-203818-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp testdata/cp-test.txt multinode-203818-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3396189668/001/cp-test_multinode-203818-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818-m02:/home/docker/cp-test.txt multinode-203818:/home/docker/cp-test_multinode-203818-m02_multinode-203818.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818 "sudo cat /home/docker/cp-test_multinode-203818-m02_multinode-203818.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818-m02:/home/docker/cp-test.txt multinode-203818-m03:/home/docker/cp-test_multinode-203818-m02_multinode-203818-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m03 "sudo cat /home/docker/cp-test_multinode-203818-m02_multinode-203818-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp testdata/cp-test.txt multinode-203818-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3396189668/001/cp-test_multinode-203818-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt multinode-203818:/home/docker/cp-test_multinode-203818-m03_multinode-203818.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818 "sudo cat /home/docker/cp-test_multinode-203818-m03_multinode-203818.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 cp multinode-203818-m03:/home/docker/cp-test.txt multinode-203818-m02:/home/docker/cp-test_multinode-203818-m03_multinode-203818-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 ssh -n multinode-203818-m02 "sudo cat /home/docker/cp-test_multinode-203818-m03_multinode-203818-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.25s)
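
The block above walks the full `minikube cp` matrix: a local file to each node, each node back to a local path, and node to node for every ordered pair of the three nodes, with each copy verified by `ssh -n <node> "sudo cat ..."`. One round-trip of that pattern as a sketch, with hypothetical helper names:

package main

import (
	"fmt"
	"os/exec"
)

const minikube = "out/minikube-darwin-amd64"

// cp copies src to dst via `minikube cp`, as in the helpers_test.go calls above.
func cp(profile, src, dst string) error {
	return exec.Command(minikube, "-p", profile, "cp", src, dst).Run()
}

// catOnNode reads a file on a specific node, mirroring the ssh -n checks above.
func catOnNode(profile, node, path string) (string, error) {
	out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node,
		"sudo cat "+path).Output()
	return string(out), err
}

func main() {
	p := "multinode-203818"
	if err := cp(p, "testdata/cp-test.txt", p+":/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := catOnNode(p, p, "/home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(got)
}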

TestMultiNode/serial/StopNode (14.01s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 node stop m03: (12.445089047s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-203818 status: exit status 7 (782.213791ms)

-- stdout --
	multinode-203818
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-203818-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-203818-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr: exit status 7 (783.72672ms)

-- stdout --
	multinode-203818
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-203818-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-203818-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 20:40:44.962856    8504 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:40:44.963027    8504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:40:44.963032    8504 out.go:309] Setting ErrFile to fd 2...
	I1025 20:40:44.963037    8504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:40:44.963158    8504 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:40:44.963327    8504 out.go:303] Setting JSON to false
	I1025 20:40:44.963348    8504 mustload.go:65] Loading cluster: multinode-203818
	I1025 20:40:44.963388    8504 notify.go:220] Checking for updates...
	I1025 20:40:44.963639    8504 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:40:44.963652    8504 status.go:255] checking status of multinode-203818 ...
	I1025 20:40:44.963985    8504 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:40:45.027973    8504 status.go:330] multinode-203818 host status = "Running" (err=<nil>)
	I1025 20:40:45.027993    8504 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:40:45.028200    8504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818
	I1025 20:40:45.091014    8504 host.go:66] Checking if "multinode-203818" exists ...
	I1025 20:40:45.091274    8504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:40:45.091348    8504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:40:45.154290    8504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50935 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818/id_rsa Username:docker}
	I1025 20:40:45.240079    8504 ssh_runner.go:195] Run: systemctl --version
	I1025 20:40:45.244635    8504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:40:45.253600    8504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-203818
	I1025 20:40:45.318961    8504 kubeconfig.go:92] found "multinode-203818" server: "https://127.0.0.1:50934"
	I1025 20:40:45.318987    8504 api_server.go:165] Checking apiserver status ...
	I1025 20:40:45.319027    8504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 20:40:45.329008    8504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1728/cgroup
	W1025 20:40:45.336504    8504 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1728/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 20:40:45.336545    8504 ssh_runner.go:195] Run: ls
	I1025 20:40:45.340149    8504 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50934/healthz ...
	I1025 20:40:45.345440    8504 api_server.go:278] https://127.0.0.1:50934/healthz returned 200:
	ok
	I1025 20:40:45.345451    8504 status.go:421] multinode-203818 apiserver status = Running (err=<nil>)
	I1025 20:40:45.345461    8504 status.go:257] multinode-203818 status: &{Name:multinode-203818 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 20:40:45.345474    8504 status.go:255] checking status of multinode-203818-m02 ...
	I1025 20:40:45.345685    8504 cli_runner.go:164] Run: docker container inspect multinode-203818-m02 --format={{.State.Status}}
	I1025 20:40:45.409421    8504 status.go:330] multinode-203818-m02 host status = "Running" (err=<nil>)
	I1025 20:40:45.409440    8504 host.go:66] Checking if "multinode-203818-m02" exists ...
	I1025 20:40:45.409679    8504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203818-m02
	I1025 20:40:45.473564    8504 host.go:66] Checking if "multinode-203818-m02" exists ...
	I1025 20:40:45.473816    8504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 20:40:45.473861    8504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203818-m02
	I1025 20:40:45.537568    8504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50996 SSHKeyPath:/Users/jenkins/minikube-integration/14956-2080/.minikube/machines/multinode-203818-m02/id_rsa Username:docker}
	I1025 20:40:45.623526    8504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 20:40:45.632360    8504 status.go:257] multinode-203818-m02 status: &{Name:multinode-203818-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 20:40:45.632379    8504 status.go:255] checking status of multinode-203818-m03 ...
	I1025 20:40:45.632610    8504 cli_runner.go:164] Run: docker container inspect multinode-203818-m03 --format={{.State.Status}}
	I1025 20:40:45.695849    8504 status.go:330] multinode-203818-m03 host status = "Stopped" (err=<nil>)
	I1025 20:40:45.695878    8504 status.go:343] host is not running, skipping remaining checks
	I1025 20:40:45.695887    8504 status.go:257] multinode-203818-m03 status: &{Name:multinode-203818-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.01s)

TestMultiNode/serial/StartAfterStop (19.44s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 node start m03 --alsologtostderr: (18.306750961s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 status: (1.018120042s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.44s)

TestMultiNode/serial/RestartKeepsNodes (109.59s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-203818
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-203818
E1025 20:41:35.483256    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-203818: (36.683871782s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-203818 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-203818 --wait=true -v=8 --alsologtostderr: (1m12.795378837s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-203818
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.59s)

TestMultiNode/serial/DeleteNode (17.01s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 node delete m03: (16.098990936s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (17.01s)

TestMultiNode/serial/StopMultiNode (24.95s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-203818 stop: (24.592441371s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-203818 status: exit status 7 (177.969979ms)

-- stdout --
	multinode-203818
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-203818-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-203818 status --alsologtostderr: exit status 7 (175.056262ms)

-- stdout --
	multinode-203818
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-203818-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 20:43:36.556728    9114 out.go:296] Setting OutFile to fd 1 ...
	I1025 20:43:36.556895    9114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:43:36.556900    9114 out.go:309] Setting ErrFile to fd 2...
	I1025 20:43:36.556904    9114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 20:43:36.557008    9114 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/14956-2080/.minikube/bin
	I1025 20:43:36.557196    9114 out.go:303] Setting JSON to false
	I1025 20:43:36.557216    9114 mustload.go:65] Loading cluster: multinode-203818
	I1025 20:43:36.557254    9114 notify.go:220] Checking for updates...
	I1025 20:43:36.557536    9114 config.go:180] Loaded profile config "multinode-203818": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 20:43:36.557549    9114 status.go:255] checking status of multinode-203818 ...
	I1025 20:43:36.557883    9114 cli_runner.go:164] Run: docker container inspect multinode-203818 --format={{.State.Status}}
	I1025 20:43:36.619754    9114 status.go:330] multinode-203818 host status = "Stopped" (err=<nil>)
	I1025 20:43:36.619769    9114 status.go:343] host is not running, skipping remaining checks
	I1025 20:43:36.619775    9114 status.go:257] multinode-203818 status: &{Name:multinode-203818 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 20:43:36.619793    9114 status.go:255] checking status of multinode-203818-m02 ...
	I1025 20:43:36.620023    9114 cli_runner.go:164] Run: docker container inspect multinode-203818-m02 --format={{.State.Status}}
	I1025 20:43:36.680930    9114 status.go:330] multinode-203818-m02 host status = "Stopped" (err=<nil>)
	I1025 20:43:36.680959    9114 status.go:343] host is not running, skipping remaining checks
	I1025 20:43:36.680966    9114 status.go:257] multinode-203818-m02 status: &{Name:multinode-203818-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.95s)

TestMultiNode/serial/ValidateNameConflict (31.61s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-203818
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-203818-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-203818-m02 --driver=docker : exit status 14 (396.425034ms)

-- stdout --
	* [multinode-203818-m02] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-203818-m02' is duplicated with machine name 'multinode-203818-m02' in profile 'multinode-203818'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-203818-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-203818-m03 --driver=docker : (27.946062711s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-203818
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-203818: exit status 80 (512.580732ms)

-- stdout --
	* Adding node m03 to cluster multinode-203818
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-203818-m03 already exists in multinode-203818-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-203818-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-203818-m03: (2.688496788s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.61s)

TestPreload (136.13s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-204722 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-204722 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (56.574975586s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-204722 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-204722 -- docker pull gcr.io/k8s-minikube/busybox: (2.10603673s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-204722 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-204722 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6: (1m14.135518147s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-204722 -- docker images
helpers_test.go:175: Cleaning up "test-preload-204722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-204722
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-204722: (2.870140204s)
--- PASS: TestPreload (136.13s)
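
The sequence above captures what TestPreload checks: start v1.24.4 with --preload=false, pull gcr.io/k8s-minikube/busybox by hand, restart the same profile on v1.24.6, and confirm with `docker images` that the manually pulled image survived the version bump. A sketch of that final check, reusing the exact command from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "ssh",
		"-p", "test-preload-204722", "--", "docker", "images").Output()
	if err != nil {
		panic(err)
	}
	// The image pulled before the upgrade should still be listed afterwards.
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("manually pulled image survived the version upgrade")
	} else {
		fmt.Println("busybox image missing after upgrade")
	}
}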

TestScheduledStopUnix (101.29s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-204939 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-204939 --memory=2048 --driver=docker : (26.933304721s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-204939 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-204939 -n scheduled-stop-204939
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-204939 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-204939 --cancel-scheduled
E1025 20:50:12.436199    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/addons-201804/client.crt: no such file or directory
E1025 20:50:12.970979    2916 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/functional-202225/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-204939 -n scheduled-stop-204939
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-204939
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-204939 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-204939
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-204939: exit status 7 (119.478024ms)

                                                
                                                
-- stdout --
	scheduled-stop-204939
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-204939 -n scheduled-stop-204939
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-204939 -n scheduled-stop-204939: exit status 7 (113.070048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-204939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-204939
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-204939: (2.399949174s)
--- PASS: TestScheduledStopUnix (101.29s)
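
The scheduled-stop flow driven above maps onto a handful of commands. A sketch, assuming a running profile; "sched-demo" is an illustrative name:

	# schedule a stop five minutes out, inspect the countdown, then cancel it
	minikube stop -p sched-demo --schedule 5m        # sched-demo is illustrative
	minikube status -p sched-demo --format='{{.TimeToStop}}'
	minikube stop -p sched-demo --cancel-scheduled
	# a short schedule actually stops the node; status then exits 7 (Stopped), as seen above
	minikube stop -p sched-demo --schedule 15s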

                                                
                                    
TestSkaffold (56.7s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1401798030 version
skaffold_test.go:63: skaffold version: v2.0.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-205120 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-205120 --memory=2600 --driver=docker : (25.640700408s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1401798030 run --minikube-profile skaffold-205120 --kube-context skaffold-205120 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1401798030 run --minikube-profile skaffold-205120 --kube-context skaffold-205120 --status-check=true --port-forward=false --interactive=false: (16.355044981s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-666bf98f6-8ngwl" [9bf0624d-59ec-47e3-ad49-8818efb544c9] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013619485s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-7f99497657-pzxtf" [31a65392-1264-4140-8737-6fe6e473a321] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006380973s
helpers_test.go:175: Cleaning up "skaffold-205120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-205120
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-205120: (2.944047197s)
--- PASS: TestSkaffold (56.70s)
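
The skaffold invocation is the interesting part here: skaffold is pointed at a specific minikube profile and kube-context. A sketch with an illustrative profile name:

	minikube start -p skaffold-demo --memory=2600 --driver=docker   # profile name is illustrative
	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
	  --status-check=true --port-forward=false --interactive=false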

                                                
                                    
TestInsufficientStorage (13.6s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-205217 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-205217 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (10.33615244s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e2688f74-0091-4307-b5f2-17d7ff03e102","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-205217] minikube v1.27.1 on Darwin 12.6","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"113fb4e6-7907-4aa9-8da6-0114bca96da2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14956"}}
	{"specversion":"1.0","id":"93c89733-9c8d-466b-b7f5-3db5c5c3e42b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig"}}
	{"specversion":"1.0","id":"0b66f210-bf58-4e11-a190-cb3ada73846f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f9144b94-239a-4ae5-8dac-a9fad01f3a99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0ea502b7-c8e5-4abc-b771-4eb0a68bf54a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube"}}
	{"specversion":"1.0","id":"db5fed30-2837-4085-9bc1-9c9077f8ff6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e48d1c61-8971-4daf-8b0c-c2477922f5dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7caee8ae-9d56-45c0-80fd-c058883672db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"147a2cda-3bae-45e7-836e-79afce48a2f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"024841f6-3e25-43f1-8ce2-81b0f1196e62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-205217 in cluster insufficient-storage-205217","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1275861c-30a0-4b6b-ad7a-387e86b04a15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2471240-61e1-4769-b4f7-2322af6b6de1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"53465871-57b5-4331-8b09-597088b192de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-205217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-205217 --output=json --layout=cluster: exit status 7 (406.339013ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-205217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-205217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 20:52:27.826223   10828 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-205217" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-205217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-205217 --output=json --layout=cluster: exit status 7 (415.799075ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-205217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-205217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 20:52:28.242792   10838 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-205217" does not appear in /Users/jenkins/minikube-integration/14956-2080/kubeconfig
	E1025 20:52:28.250900   10838 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/14956-2080/.minikube/profiles/insufficient-storage-205217/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-205217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-205217
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-205217: (2.438917419s)
--- PASS: TestInsufficientStorage (13.60s)
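
The exit-26 advice embedded in the JSON output reduces to a few cleanup commands. A sketch of the remediation it suggests; the final --force line skips the storage check outright, as the error message notes:

	docker system prune -a                # remove unused Docker Desktop data
	minikube ssh -- docker system prune   # if using the Docker container runtime
	minikube start --force                # or bypass the storage check entirely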

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-212109 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (352.236997ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-212109] minikube v1.27.1 on Darwin 12.6
	  - MINIKUBE_LOCATION=14956
	  - KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/14956-2080/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)
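
The MK_USAGE failure is the expected guard: --kubernetes-version and --no-kubernetes are mutually exclusive. The remedy the message prints works as a two-step sketch ("nok8s-demo" is an illustrative name):

	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker   # profile name is illustrative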

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-212109 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-212109 "sudo systemctl is-active --quiet service kubelet": exit status 80 (203.177506ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_a637006dfde1245e93469fe3227a30492e7a4c9f_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
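
The kubelet check itself is a one-liner over ssh. A sketch; note that systemctl is-active exits non-zero when the unit is not running, so a non-zero exit is the desired result for a --no-kubernetes profile:

	minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet active" \
	  || echo "kubelet not active (expected with --no-kubernetes)"   # profile name is illustrative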

                                                
                                    
TestNoKubernetes/serial/ProfileList (7.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (3.631721029s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (3.634451063s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.27s)
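
Both listing forms appear above; the JSON form is the scriptable one. A sketch (piping through jq is an assumption about how one might consume the output, not something the test does):

	minikube profile list
	minikube profile list --output=json | jq .   # jq is an assumed consumer, not part of the test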

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-212109 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-212109 "sudo systemctl is-active --quiet service kubelet": exit status 80 (203.12793ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-212109": docker container inspect NoKubernetes-212109 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-212109
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_a637006dfde1245e93469fe3227a30492e7a4c9f_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.51s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.27.1 on darwin
- MINIKUBE_LOCATION=14956
- KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3201902872/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3201902872/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3201902872/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3201902872/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.51s)
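
The "Unable to update hyperkit driver" warning is benign in CI, where sudo has no password, but the two commands minikube wants to run are printed verbatim. A sketch with $MINIKUBE_HOME standing in for the per-test temp directory:

	# give the hyperkit driver the root-owned setuid bit it requires
	sudo chown root:wheel "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"
	sudo chmod u+s "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"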

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.73s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.27.1 on darwin
- MINIKUBE_LOCATION=14956
- KUBECONFIG=/Users/jenkins/minikube-integration/14956-2080/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current293870606/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current293870606/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current293870606/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current293870606/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-213632 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (18/246)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 9.763111ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-724v8" [054f47e7-2150-472f-859c-8c1a1b3995c9] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007886321s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-kvp4x" [582740b5-b192-469c-99c1-fb74a063eaca] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00917422s
addons_test.go:292: (dbg) Run:  kubectl --context addons-201804 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-201804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-201804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.804615869s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.90s)
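
The in-cluster probe that ran before the skip is reusable on its own. A sketch against any context with the registry addon enabled (the context name is illustrative):

	kubectl --context addons-demo run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it \
	  -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"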

                                                
                                    
TestAddons/parallel/Ingress (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-201804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-201804 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-201804 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [1d99257b-e8f1-40f2-8d6b-d63ee5954298] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [1d99257b-e8f1-40f2-8d6b-d63ee5954298] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.009534971s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-201804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.86s)
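
The steps that did run amount to a workable manual ingress check. A sketch using the same readiness wait and host header as the test:

	kubectl wait --for=condition=ready --namespace=ingress-nginx pod \
	  --selector=app.kubernetes.io/component=controller --timeout=90s
	minikube ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"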

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-202225 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-202225 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-z8tfv" [3e306d9f-0c8c-44d9-b33c-a4f4a2907d5a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-z8tfv" [3e306d9f-0c8c-44d9-b33c-a4f4a2907d5a] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.009463218s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.12s)
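
Up to the skip, this is a standard NodePort smoke test. A sketch of the same steps (the --watch line stands in for the test's pod-ready wait and is an assumption, not from the log):

	kubectl create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl get pods -l app=hello-node-connect --watch   # assumed stand-in for the readiness wait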

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-205230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-205230
--- SKIP: TestNetworkPlugins/group/flannel (0.63s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (0.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-205231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-205231
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.58s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-213432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-213432
--- SKIP: TestStartStop/group/disable-driver-mounts (0.44s)

                                                
                                    