Test Report: Docker_macOS 14606

584c9efc3417eaa1e4c58e683eaf61fb634889e6:2022-07-18:24912

Failed tests: 78 of 245

Order  Failed test  Duration (s)
4 TestDownloadOnly/v1.16.0/preload-exists 0.1
34 TestCertOptions 1.45
35 TestCertExpiration 181.66
36 TestDockerFlags 1.39
37 TestForceSystemdFlag 1.25
38 TestForceSystemdEnv 1.36
136 TestIngressAddonLegacy/StartLegacyK8sCluster 253.53
138 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 89.58
139 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 89.53
140 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.51
203 TestPreload 266.46
209 TestRunningBinaryUpgrade 96.95
211 TestKubernetesUpgrade 57.27
212 TestMissingContainerUpgrade 239.99
225 TestStoppedBinaryUpgrade/Upgrade 155.27
226 TestStoppedBinaryUpgrade/MinikubeLogs 0.51
235 TestPause/serial/Start 0.67
238 TestNoKubernetes/serial/StartWithK8s 0.73
239 TestNoKubernetes/serial/StartWithStopK8s 0.68
240 TestNoKubernetes/serial/Start 0.76
242 TestNoKubernetes/serial/ProfileList 0.39
243 TestNoKubernetes/serial/Stop 0.32
244 TestNoKubernetes/serial/StartNoArgs 0.79
248 TestNetworkPlugins/group/auto/Start 0.54
249 TestNetworkPlugins/group/kindnet/Start 0.5
250 TestNetworkPlugins/group/cilium/Start 0.54
251 TestNetworkPlugins/group/calico/Start 0.52
252 TestNetworkPlugins/group/false/Start 0.52
253 TestNetworkPlugins/group/bridge/Start 0.52
254 TestNetworkPlugins/group/enable-default-cni/Start 0.52
255 TestNetworkPlugins/group/kubenet/Start 0.5
257 TestStartStop/group/old-k8s-version/serial/FirstStart 0.69
258 TestStartStop/group/old-k8s-version/serial/DeployApp 0.4
259 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.38
260 TestStartStop/group/old-k8s-version/serial/Stop 0.3
261 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.46
262 TestStartStop/group/old-k8s-version/serial/SecondStart 0.68
263 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.18
264 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.24
265 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
266 TestStartStop/group/old-k8s-version/serial/Pause 0.49
268 TestStartStop/group/no-preload/serial/FirstStart 0.67
269 TestStartStop/group/no-preload/serial/DeployApp 0.39
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.39
271 TestStartStop/group/no-preload/serial/Stop 0.3
272 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.44
273 TestStartStop/group/no-preload/serial/SecondStart 0.71
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.18
275 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.21
276 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
277 TestStartStop/group/no-preload/serial/Pause 0.48
279 TestStartStop/group/embed-certs/serial/FirstStart 0.71
280 TestStartStop/group/embed-certs/serial/DeployApp 0.46
281 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.35
282 TestStartStop/group/embed-certs/serial/Stop 0.3
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.44
284 TestStartStop/group/embed-certs/serial/SecondStart 0.69
285 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.18
286 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.21
287 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
288 TestStartStop/group/embed-certs/serial/Pause 0.48
290 TestStartStop/group/default-k8s-different-port/serial/FirstStart 0.68
291 TestStartStop/group/default-k8s-different-port/serial/DeployApp 0.4
292 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.35
293 TestStartStop/group/default-k8s-different-port/serial/Stop 0.3
294 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.44
295 TestStartStop/group/default-k8s-different-port/serial/SecondStart 0.66
296 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 0.18
297 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 0.22
298 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.3
299 TestStartStop/group/default-k8s-different-port/serial/Pause 0.49
301 TestStartStop/group/newest-cni/serial/FirstStart 0.73
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.33
304 TestStartStop/group/newest-cni/serial/Stop 0.3
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
306 TestStartStop/group/newest-cni/serial/SecondStart 0.7
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
310 TestStartStop/group/newest-cni/serial/Pause 0.52
TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
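This failure is a plain existence check: the test stats the preloaded-images tarball under the run's `.minikube/cache` directory and fails because the file was never downloaded. A minimal sketch of the same check, with the tarball path copied from the failure message; `preload_exists` and the default directory are hypothetical stand-ins, not minikube's own code:

```shell
#!/bin/sh
# Sketch: report whether the v1.16.0 preload tarball (path taken from the
# failure message above) exists under a given .minikube directory.
preload_exists() {
  tarball="$1/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
  if [ -f "$tarball" ]; then
    echo "ok"
  else
    echo "missing"
  fi
}

# Hypothetical default location; the CI run used a per-build directory.
preload_exists "${MINIKUBE_HOME:-$HOME/.minikube}"
```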

TestCertOptions (1.45s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220718020953-4043 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-20220718020953-4043 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: exit status 69 (499.535899ms)

-- stdout --
	* [cert-options-20220718020953-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-20220718020953-4043 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost" : exit status 69
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220718020953-4043 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-20220718020953-4043 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 85 (135.496597ms)

-- stdout --
	* Profile "cert-options-20220718020953-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20220718020953-4043"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-20220718020953-4043 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 85
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:82: failed to inspect container for the port get port 8555 for "cert-options-20220718020953-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20220718020953-4043: exit status 1
stdout:

stderr:
Error response from daemon: Bad response from Docker engine
cert_options_test.go:85: expected to get a non-zero forwarded port but got 0
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220718020953-4043 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-20220718020953-4043 -- "sudo cat /etc/kubernetes/admin.conf": exit status 85 (115.475232ms)

-- stdout --
	* Profile "cert-options-20220718020953-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20220718020953-4043"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-20220718020953-4043 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 85
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* Profile "cert-options-20220718020953-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20220718020953-4043"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2022-07-18 02:09:54.127286 -0700 PDT m=+2642.946050042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20220718020953-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-options-20220718020953-4043: exit status 1 (65.079794ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20220718020953-4043 -n cert-options-20220718020953-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20220718020953-4043 -n cert-options-20220718020953-4043: exit status 85 (115.21089ms)

-- stdout --
	* Profile "cert-options-20220718020953-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20220718020953-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "cert-options-20220718020953-4043" host is not running, skipping log retrieval (state="* Profile \"cert-options-20220718020953-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p cert-options-20220718020953-4043\"")
helpers_test.go:175: Cleaning up "cert-options-20220718020953-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220718020953-4043
--- FAIL: TestCertOptions (1.45s)
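Nearly every command in the failures above exits with one of two codes: 69, where minikube rejects the docker driver as unhealthy before creating anything, and 85, where follow-up commands run against a profile that was consequently never created. A small sketch mapping the codes seen in this report to their apparent meanings; the mapping is inferred from the log messages, not taken from minikube's source:

```shell
#!/bin/sh
# Map the minikube exit codes observed in this report to their apparent
# meanings (inferred from the failure messages, not from minikube itself).
classify_exit() {
  case "$1" in
    69) echo "PROVIDER_DOCKER_VERSION_EXIT_1: docker driver unhealthy" ;;
    85) echo "profile not found: cluster was never started" ;;
    *)  echo "unclassified exit status $1" ;;
  esac
}

classify_exit 69
classify_exit 85
```

Under that reading, the 85s are downstream noise: fixing the Docker engine on the agent should clear both.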

TestCertExpiration (181.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220718020911-4043 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20220718020911-4043 --memory=2048 --cert-expiration=3m --driver=docker : exit status 69 (516.089933ms)

-- stdout --
	* [cert-expiration-20220718020911-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-20220718020911-4043 --memory=2048 --cert-expiration=3m --driver=docker " : exit status 69

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220718020911-4043 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20220718020911-4043 --memory=2048 --cert-expiration=8760h --driver=docker : exit status 69 (479.883994ms)

-- stdout --
	* [cert-expiration-20220718020911-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-20220718020911-4043 --memory=2048 --cert-expiration=8760h --driver=docker " : exit status 69
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20220718020911-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2022-07-18 02:12:12.6713 -0700 PDT m=+2781.488402973
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20220718020911-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-expiration-20220718020911-4043: exit status 1 (66.51231ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20220718020911-4043 -n cert-expiration-20220718020911-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20220718020911-4043 -n cert-expiration-20220718020911-4043: exit status 85 (120.589169ms)

-- stdout --
	* Profile "cert-expiration-20220718020911-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-expiration-20220718020911-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "cert-expiration-20220718020911-4043" host is not running, skipping log retrieval (state="* Profile \"cert-expiration-20220718020911-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p cert-expiration-20220718020911-4043\"")
helpers_test.go:175: Cleaning up "cert-expiration-20220718020911-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220718020911-4043
--- FAIL: TestCertExpiration (181.66s)

TestDockerFlags (1.39s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220718020951-4043 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-20220718020951-4043 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 69 (518.517224ms)

-- stdout --
	* [docker-flags-20220718020951-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:51.968171   15621 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:51.968367   15621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:51.968374   15621 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:51.968380   15621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:51.968486   15621 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:51.969008   15621 out.go:303] Setting JSON to false
	I0718 02:09:51.983878   15621 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4164,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:51.983954   15621 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:52.005908   15621 out.go:177] * [docker-flags-20220718020951-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:52.048780   15621 notify.go:193] Checking for updates...
	I0718 02:09:52.070728   15621 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:52.092675   15621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:52.113740   15621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:52.135029   15621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:52.156960   15621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:52.179048   15621 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:52.245616   15621 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:52.266796   15621 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:52.309614   15621 start.go:284] selected driver: docker
	I0718 02:09:52.309642   15621 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:52.309671   15621 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:52.331811   15621 out.go:177] 
	W0718 02:09:52.353808   15621 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:52.353916   15621 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:52.375419   15621 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-20220718020951-4043 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 69
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 85 (117.42722ms)

-- stdout --
	* Profile "docker-flags-20220718020951-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p docker-flags-20220718020951-4043"

-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 85
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* Profile \"docker-flags-20220718020951-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20220718020951-4043\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* Profile \"docker-flags-20220718020951-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20220718020951-4043\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 85 (114.534578ms)

-- stdout --
	* Profile "docker-flags-20220718020951-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p docker-flags-20220718020951-4043"

-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 85
docker_test.go:67: expected "out/minikube-darwin-amd64 -p docker-flags-20220718020951-4043 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* Profile \"docker-flags-20220718020951-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20220718020951-4043\"\n"
panic.go:482: *** TestDockerFlags FAILED at 2022-07-18 02:09:52.671536 -0700 PDT m=+2641.490333412
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20220718020951-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect docker-flags-20220718020951-4043: exit status 1 (64.862093ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20220718020951-4043 -n docker-flags-20220718020951-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20220718020951-4043 -n docker-flags-20220718020951-4043: exit status 85 (119.263286ms)

-- stdout --
	* Profile "docker-flags-20220718020951-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p docker-flags-20220718020951-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "docker-flags-20220718020951-4043" host is not running, skipping log retrieval (state="* Profile \"docker-flags-20220718020951-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20220718020951-4043\"")
helpers_test.go:175: Cleaning up "docker-flags-20220718020951-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220718020951-4043
--- FAIL: TestDockerFlags (1.39s)
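The verbose trace in TestDockerFlags pins down the shared root cause: the driver health probe itself (docker.go:113, start.go:819) gets "Bad response from Docker engine" back from the daemon. A triage sketch that counts how many stderr fragments in an excerpt share that root cause; the three sample lines are hand-copied from the failures above, and the heredoc-style variable is just an illustration, not how the CI harness stores logs:

```shell
#!/bin/sh
# Count occurrences of the shared Docker-engine root cause in a log excerpt
# (three sample stderr lines copied from the failures above).
excerpt='X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
Error response from daemon: Bad response from Docker engine
W0718 02:09:52.245616   15621 docker.go:113] docker version returned error: exit status 1'

printf '%s\n' "$excerpt" | grep -c 'Bad response from Docker engine'
```

Run against the full report, the same `grep -c` would show how many of the 78 failures trace back to this one unhealthy Docker engine on the agent.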

TestForceSystemdFlag (1.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220718020846-4043 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-20220718020846-4043 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 69 (499.727932ms)

-- stdout --
	* [force-systemd-flag-20220718020846-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:08:46.500505   15199 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:08:46.500692   15199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:08:46.500697   15199 out.go:309] Setting ErrFile to fd 2...
	I0718 02:08:46.500701   15199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:08:46.500802   15199 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:08:46.501280   15199 out.go:303] Setting JSON to false
	I0718 02:08:46.516198   15199 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4099,"bootTime":1658131227,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:08:46.516294   15199 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:08:46.538014   15199 out.go:177] * [force-systemd-flag-20220718020846-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:08:46.581041   15199 notify.go:193] Checking for updates...
	I0718 02:08:46.602743   15199 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:08:46.623969   15199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:08:46.645915   15199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:08:46.668139   15199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:08:46.690076   15199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:08:46.712551   15199 config.go:178] Loaded profile config "running-upgrade-20220718020814-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0718 02:08:46.712618   15199 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:08:46.778697   15199 docker.go:113] docker version returned error: exit status 1
	I0718 02:08:46.800212   15199 out.go:177] * Using the docker driver based on user configuration
	I0718 02:08:46.842092   15199 start.go:284] selected driver: docker
	I0718 02:08:46.842118   15199 start.go:808] validating driver "docker" against <nil>
	I0718 02:08:46.842147   15199 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:08:46.864180   15199 out.go:177] 
	W0718 02:08:46.886456   15199 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:08:46.886587   15199 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:08:46.907739   15199 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-20220718020846-4043 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 69
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220718020846-4043 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-20220718020846-4043 ssh "docker info --format {{.CgroupDriver}}": exit status 85 (112.527691ms)

-- stdout --
	* Profile "force-systemd-flag-20220718020846-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-flag-20220718020846-4043"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-20220718020846-4043 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 85
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-07-18 02:08:47.065179 -0700 PDT m=+2575.895163013
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20220718020846-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-flag-20220718020846-4043: exit status 1 (68.247306ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20220718020846-4043 -n force-systemd-flag-20220718020846-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20220718020846-4043 -n force-systemd-flag-20220718020846-4043: exit status 85 (118.855083ms)

-- stdout --
	* Profile "force-systemd-flag-20220718020846-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-flag-20220718020846-4043"

                                                
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "force-systemd-flag-20220718020846-4043" host is not running, skipping log retrieval (state="* Profile \"force-systemd-flag-20220718020846-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p force-systemd-flag-20220718020846-4043\"")
helpers_test.go:175: Cleaning up "force-systemd-flag-20220718020846-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220718020846-4043
E0718 02:08:47.567455    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:47.573775    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:47.584096    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:47.605628    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:47.645918    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
--- FAIL: TestForceSystemdFlag (1.25s)

TestForceSystemdEnv (1.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220718020910-4043 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-20220718020910-4043 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 69 (582.331795ms)

-- stdout --
	* [force-systemd-env-20220718020910-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:10.357005   15369 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:10.357139   15369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:10.357144   15369 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:10.357148   15369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:10.357245   15369 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:10.357726   15369 out.go:303] Setting JSON to false
	I0718 02:09:10.372729   15369 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4123,"bootTime":1658131227,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:10.372822   15369 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:10.395369   15369 out.go:177] * [force-systemd-env-20220718020910-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:10.437296   15369 notify.go:193] Checking for updates...
	I0718 02:09:10.438121   15369 preload.go:306] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I0718 02:09:10.459405   15369 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:10.502155   15369 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:10.502287   15369 preload.go:306] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I0718 02:09:10.550960   15369 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:10.571680   15369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:10.594115   15369 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:10.616036   15369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0718 02:09:10.637269   15369 config.go:178] Loaded profile config "running-upgrade-20220718020814-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0718 02:09:10.637329   15369 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:10.701549   15369 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:10.722578   15369 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:10.764211   15369 start.go:284] selected driver: docker
	I0718 02:09:10.764221   15369 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:10.764236   15369 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:10.785251   15369 out.go:177] 
	W0718 02:09:10.806340   15369 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:10.806417   15369 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:10.848403   15369 out.go:177] 

** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-20220718020910-4043 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 69
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220718020910-4043 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-20220718020910-4043 ssh "docker info --format {{.CgroupDriver}}": exit status 85 (114.701338ms)

-- stdout --
	* Profile "force-systemd-env-20220718020910-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-env-20220718020910-4043"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-20220718020910-4043 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 85
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2022-07-18 02:09:11.007347 -0700 PDT m=+2599.829274920
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20220718020910-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-env-20220718020910-4043: exit status 1 (65.912329ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20220718020910-4043 -n force-systemd-env-20220718020910-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20220718020910-4043 -n force-systemd-env-20220718020910-4043: exit status 85 (114.437709ms)

-- stdout --
	* Profile "force-systemd-env-20220718020910-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-env-20220718020910-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "force-systemd-env-20220718020910-4043" host is not running, skipping log retrieval (state="* Profile \"force-systemd-env-20220718020910-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p force-systemd-env-20220718020910-4043\"")
helpers_test.go:175: Cleaning up "force-systemd-env-20220718020910-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220718020910-4043
--- FAIL: TestForceSystemdEnv (1.36s)
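
Editor's note: TestDockerFlags, TestForceSystemdFlag, and TestForceSystemdEnv all fail on the same health check. The stderr dumps above show minikube's driver validation running `docker version --format {{.Server.Os}}-{{.Server.Version}}`, getting "Error response from daemon: Bad response from Docker engine" (exit status 1), and bailing out with PROVIDER_DOCKER_VERSION_EXIT_1 before any cluster is created. A minimal triage sketch that reruns that exact probe on the affected agent (the `probe_docker` wrapper is hypothetical, not part of minikube or this test suite):

```shell
# Re-run the same daemon probe minikube's docker driver uses, so a hung
# Docker Desktop daemon can be confirmed independently of the test harness.
probe_docker() {
  local out
  # Merge stderr into the captured output so the daemon's error text is kept.
  if out=$(docker version --format '{{.Server.Os}}-{{.Server.Version}}' 2>&1); then
    echo "healthy: ${out}"
  else
    echo "unhealthy: ${out}"
    return 1
  fi
}
```

On a healthy agent this prints something like `healthy: linux-20.10.17` (the server version seen earlier in this run); on this agent it would take the unhealthy branch, which points the blame at Docker Desktop rather than at the tests themselves.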

TestIngressAddonLegacy/StartLegacyK8sCluster (253.53s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220718013653-4043 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0718 01:37:57.486249    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:40:13.629922    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:40:29.621759    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:29.628254    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:29.640542    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:29.662844    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:29.705109    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:29.786815    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:29.948335    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:30.268885    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:30.909945    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:32.192136    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:34.753490    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:39.873709    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:40:41.325698    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:40:50.116049    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220718013653-4043 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m13.502937978s)

-- stdout --
	* [ingress-addon-legacy-20220718013653-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220718013653-4043 in cluster ingress-addon-legacy-20220718013653-4043
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0718 01:36:53.079394    8222 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:36:53.079555    8222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:36:53.079561    8222 out.go:309] Setting ErrFile to fd 2...
	I0718 01:36:53.079565    8222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:36:53.080238    8222 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:36:53.081110    8222 out.go:303] Setting JSON to false
	I0718 01:36:53.096574    8222 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2186,"bootTime":1658131227,"procs":372,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 01:36:53.096689    8222 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 01:36:53.119133    8222 out.go:177] * [ingress-addon-legacy-20220718013653-4043] minikube v1.26.0 on Darwin 12.4
	I0718 01:36:53.162074    8222 notify.go:193] Checking for updates...
	I0718 01:36:53.183761    8222 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 01:36:53.204576    8222 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 01:36:53.226004    8222 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 01:36:53.248093    8222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 01:36:53.269855    8222 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 01:36:53.292318    8222 driver.go:360] Setting default libvirt URI to qemu:///system
	I0718 01:36:53.366083    8222 docker.go:137] docker version: linux-20.10.17
	I0718 01:36:53.366220    8222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:36:53.500797    8222 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-18 08:36:53.438584269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:36:53.544626    8222 out.go:177] * Using the docker driver based on user configuration
	I0718 01:36:53.566635    8222 start.go:284] selected driver: docker
	I0718 01:36:53.566668    8222 start.go:808] validating driver "docker" against <nil>
	I0718 01:36:53.566695    8222 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 01:36:53.570105    8222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:36:53.704587    8222 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-18 08:36:53.643163205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:36:53.704772    8222 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0718 01:36:53.704927    8222 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 01:36:53.726644    8222 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 01:36:53.747499    8222 cni.go:95] Creating CNI manager for ""
	I0718 01:36:53.747531    8222 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 01:36:53.747552    8222 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220718013653-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220718013653-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:36:53.769781    8222 out.go:177] * Starting control plane node ingress-addon-legacy-20220718013653-4043 in cluster ingress-addon-legacy-20220718013653-4043
	I0718 01:36:53.791563    8222 cache.go:120] Beginning downloading kic base image for docker with docker
	I0718 01:36:53.813646    8222 out.go:177] * Pulling base image ...
	I0718 01:36:53.857900    8222 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0718 01:36:53.857903    8222 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0718 01:36:53.927336    8222 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0718 01:36:53.927357    8222 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0718 01:36:53.930023    8222 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0718 01:36:53.930043    8222 cache.go:57] Caching tarball of preloaded images
	I0718 01:36:53.930275    8222 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0718 01:36:53.974368    8222 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0718 01:36:53.996463    8222 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0718 01:36:54.096517    8222 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0718 01:36:56.368670    8222 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0718 01:36:56.368861    8222 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0718 01:36:56.988959    8222 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0718 01:36:56.989188    8222 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/config.json ...
	I0718 01:36:56.989209    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/config.json: {Name:mk406ab0065586335181af37546df517048b9626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:36:56.989495    8222 cache.go:208] Successfully downloaded all kic artifacts
	I0718 01:36:56.989521    8222 start.go:352] acquiring machines lock for ingress-addon-legacy-20220718013653-4043: {Name:mkf3d925a91af4459f1e39d9dddf2ee645bba75f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:36:56.989646    8222 start.go:356] acquired machines lock for "ingress-addon-legacy-20220718013653-4043" in 117.788µs
	I0718 01:36:56.989667    8222 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220718013653-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220718013653-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 01:36:56.989776    8222 start.go:131] createHost starting for "" (driver="docker")
	I0718 01:36:57.038536    8222 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0718 01:36:57.038889    8222 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220718013653-4043" (driver="docker")
	I0718 01:36:57.038933    8222 client.go:168] LocalClient.Create starting
	I0718 01:36:57.039080    8222 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem
	I0718 01:36:57.039146    8222 main.go:134] libmachine: Decoding PEM data...
	I0718 01:36:57.039175    8222 main.go:134] libmachine: Parsing certificate...
	I0718 01:36:57.039314    8222 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem
	I0718 01:36:57.039387    8222 main.go:134] libmachine: Decoding PEM data...
	I0718 01:36:57.039407    8222 main.go:134] libmachine: Parsing certificate...
	I0718 01:36:57.040206    8222 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220718013653-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 01:36:57.107181    8222 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220718013653-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 01:36:57.107358    8222 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220718013653-4043] to gather additional debugging logs...
	I0718 01:36:57.107381    8222 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220718013653-4043
	W0718 01:36:57.171091    8222 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220718013653-4043 returned with exit code 1
	I0718 01:36:57.171132    8222 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220718013653-4043]: docker network inspect ingress-addon-legacy-20220718013653-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220718013653-4043
	I0718 01:36:57.171170    8222 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220718013653-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220718013653-4043
	
	** /stderr **
	I0718 01:36:57.171253    8222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 01:36:57.235549    8222 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c16160] misses:0}
	I0718 01:36:57.235583    8222 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 01:36:57.235599    8222 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220718013653-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0718 01:36:57.235684    8222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220718013653-4043 ingress-addon-legacy-20220718013653-4043
	I0718 01:36:57.334447    8222 network_create.go:99] docker network ingress-addon-legacy-20220718013653-4043 192.168.49.0/24 created
	I0718 01:36:57.334485    8222 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220718013653-4043" container
	I0718 01:36:57.334578    8222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 01:36:57.398772    8222 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220718013653-4043 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220718013653-4043 --label created_by.minikube.sigs.k8s.io=true
	I0718 01:36:57.462926    8222 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220718013653-4043
	I0718 01:36:57.463063    8222 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220718013653-4043-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220718013653-4043 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220718013653-4043:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0718 01:36:57.921113    8222 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220718013653-4043
	I0718 01:36:57.921196    8222 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0718 01:36:57.921211    8222 kic.go:179] Starting extracting preloaded images to volume ...
	I0718 01:36:57.921291    8222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220718013653-4043:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 01:37:02.418610    8222 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220718013653-4043:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.497119753s)
	I0718 01:37:02.418636    8222 kic.go:188] duration metric: took 4.497399 seconds to extract preloaded images to volume
	I0718 01:37:02.418770    8222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0718 01:37:02.554492    8222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220718013653-4043 --name ingress-addon-legacy-20220718013653-4043 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220718013653-4043 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220718013653-4043 --network ingress-addon-legacy-20220718013653-4043 --ip 192.168.49.2 --volume ingress-addon-legacy-20220718013653-4043:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0718 01:37:02.931060    8222 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220718013653-4043 --format={{.State.Running}}
	I0718 01:37:03.004364    8222 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220718013653-4043 --format={{.State.Status}}
	I0718 01:37:03.080734    8222 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220718013653-4043 stat /var/lib/dpkg/alternatives/iptables
	I0718 01:37:03.238937    8222 oci.go:144] the created container "ingress-addon-legacy-20220718013653-4043" has a running status.
	I0718 01:37:03.238976    8222 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa...
	I0718 01:37:03.414061    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0718 01:37:03.414119    8222 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0718 01:37:03.529126    8222 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220718013653-4043 --format={{.State.Status}}
	I0718 01:37:03.601037    8222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0718 01:37:03.601468    8222 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220718013653-4043 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0718 01:37:03.724790    8222 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220718013653-4043 --format={{.State.Status}}
	I0718 01:37:03.793687    8222 machine.go:88] provisioning docker machine ...
	I0718 01:37:03.794109    8222 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220718013653-4043"
	I0718 01:37:03.794209    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:03.863656    8222 main.go:134] libmachine: Using SSH client type: native
	I0718 01:37:03.863858    8222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51803 <nil> <nil>}
	I0718 01:37:03.863873    8222 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220718013653-4043 && echo "ingress-addon-legacy-20220718013653-4043" | sudo tee /etc/hostname
	I0718 01:37:03.991810    8222 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220718013653-4043
	
	I0718 01:37:03.991907    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:04.061604    8222 main.go:134] libmachine: Using SSH client type: native
	I0718 01:37:04.062224    8222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51803 <nil> <nil>}
	I0718 01:37:04.062240    8222 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220718013653-4043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220718013653-4043/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220718013653-4043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 01:37:04.185618    8222 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0718 01:37:04.185643    8222 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube}
	I0718 01:37:04.185667    8222 ubuntu.go:177] setting up certificates
	I0718 01:37:04.185680    8222 provision.go:83] configureAuth start
	I0718 01:37:04.185755    8222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:04.254866    8222 provision.go:138] copyHostCerts
	I0718 01:37:04.254903    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem
	I0718 01:37:04.254967    8222 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem, removing ...
	I0718 01:37:04.254978    8222 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem
	I0718 01:37:04.255098    8222 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem (1078 bytes)
	I0718 01:37:04.255256    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem
	I0718 01:37:04.255286    8222 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem, removing ...
	I0718 01:37:04.255312    8222 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem
	I0718 01:37:04.255376    8222 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem (1123 bytes)
	I0718 01:37:04.255503    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem
	I0718 01:37:04.255537    8222 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem, removing ...
	I0718 01:37:04.255542    8222 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem
	I0718 01:37:04.255602    8222 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem (1675 bytes)
	I0718 01:37:04.255719    8222 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220718013653-4043 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220718013653-4043]
	I0718 01:37:04.363167    8222 provision.go:172] copyRemoteCerts
	I0718 01:37:04.363223    8222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 01:37:04.363280    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:04.432126    8222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:37:04.520069    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 01:37:04.520214    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0718 01:37:04.537152    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 01:37:04.537228    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0718 01:37:04.554187    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 01:37:04.554291    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 01:37:04.570404    8222 provision.go:86] duration metric: configureAuth took 384.710541ms
	I0718 01:37:04.570415    8222 ubuntu.go:193] setting minikube options for container-runtime
	I0718 01:37:04.570560    8222 config.go:178] Loaded profile config "ingress-addon-legacy-20220718013653-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0718 01:37:04.570614    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:04.639911    8222 main.go:134] libmachine: Using SSH client type: native
	I0718 01:37:04.641109    8222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51803 <nil> <nil>}
	I0718 01:37:04.641125    8222 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 01:37:04.761841    8222 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0718 01:37:04.761856    8222 ubuntu.go:71] root file system type: overlay
	I0718 01:37:04.762041    8222 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 01:37:04.762115    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:04.831674    8222 main.go:134] libmachine: Using SSH client type: native
	I0718 01:37:04.831821    8222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51803 <nil> <nil>}
	I0718 01:37:04.831876    8222 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 01:37:04.964762    8222 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 01:37:04.964890    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:05.035429    8222 main.go:134] libmachine: Using SSH client type: native
	I0718 01:37:05.035598    8222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51803 <nil> <nil>}
	I0718 01:37:05.035612    8222 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 01:37:05.623710    8222 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-18 08:37:04.962621029 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0718 01:37:05.623734    8222 machine.go:91] provisioned docker machine in 1.829640824s
	I0718 01:37:05.623740    8222 client.go:171] LocalClient.Create took 8.584751857s
	I0718 01:37:05.623756    8222 start.go:173] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220718013653-4043" took 8.584820507s
	I0718 01:37:05.623763    8222 start.go:306] post-start starting for "ingress-addon-legacy-20220718013653-4043" (driver="docker")
	I0718 01:37:05.623767    8222 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 01:37:05.623835    8222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 01:37:05.623890    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:05.695095    8222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:37:05.783102    8222 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 01:37:05.786688    8222 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 01:37:05.786704    8222 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 01:37:05.786714    8222 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 01:37:05.786720    8222 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0718 01:37:05.786729    8222 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/addons for local assets ...
	I0718 01:37:05.786825    8222 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files for local assets ...
	I0718 01:37:05.786994    8222 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem -> 40432.pem in /etc/ssl/certs
	I0718 01:37:05.787000    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem -> /etc/ssl/certs/40432.pem
	I0718 01:37:05.787158    8222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 01:37:05.794104    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem --> /etc/ssl/certs/40432.pem (1708 bytes)
	I0718 01:37:05.811089    8222 start.go:309] post-start completed in 187.316732ms
	I0718 01:37:05.811615    8222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:05.881318    8222 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/config.json ...
	I0718 01:37:05.881809    8222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 01:37:05.881861    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:05.950792    8222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:37:06.035778    8222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 01:37:06.040055    8222 start.go:134] duration metric: createHost completed in 9.050201805s
	I0718 01:37:06.040070    8222 start.go:81] releasing machines lock for "ingress-addon-legacy-20220718013653-4043", held for 9.050364966s
	I0718 01:37:06.040146    8222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:06.109399    8222 ssh_runner.go:195] Run: systemctl --version
	I0718 01:37:06.109437    8222 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0718 01:37:06.109465    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:06.109503    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:06.183360    8222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:37:06.185323    8222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:37:06.269884    8222 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 01:37:06.737104    8222 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0718 01:37:06.737174    8222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 01:37:06.746622    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 01:37:06.758757    8222 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 01:37:06.833148    8222 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 01:37:06.903501    8222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 01:37:06.973062    8222 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 01:37:07.175639    8222 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 01:37:07.212563    8222 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 01:37:07.293890    8222 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0718 01:37:07.294088    8222 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220718013653-4043 dig +short host.docker.internal
	I0718 01:37:07.422097    8222 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0718 01:37:07.422220    8222 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0718 01:37:07.426682    8222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 01:37:07.436682    8222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:37:07.505679    8222 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0718 01:37:07.505762    8222 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 01:37:07.535958    8222 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0718 01:37:07.535973    8222 docker.go:533] Images already preloaded, skipping extraction
	I0718 01:37:07.536046    8222 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 01:37:07.565620    8222 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0718 01:37:07.565638    8222 cache_images.go:84] Images are preloaded, skipping loading
	I0718 01:37:07.565717    8222 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 01:37:07.637455    8222 cni.go:95] Creating CNI manager for ""
	I0718 01:37:07.637466    8222 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 01:37:07.637478    8222 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0718 01:37:07.637494    8222 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220718013653-4043 NodeName:ingress-addon-legacy-20220718013653-4043 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0718 01:37:07.637730    8222 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220718013653-4043"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 01:37:07.637810    8222 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220718013653-4043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220718013653-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0718 01:37:07.637866    8222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0718 01:37:07.646067    8222 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 01:37:07.646144    8222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 01:37:07.653461    8222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0718 01:37:07.666281    8222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0718 01:37:07.678654    8222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
	I0718 01:37:07.692520    8222 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0718 01:37:07.696526    8222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 01:37:07.705927    8222 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043 for IP: 192.168.49.2
	I0718 01:37:07.706028    8222 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.key
	I0718 01:37:07.706082    8222 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.key
	I0718 01:37:07.706123    8222 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/client.key
	I0718 01:37:07.706136    8222 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/client.crt with IP's: []
	I0718 01:37:07.829552    8222 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/client.crt ...
	I0718 01:37:07.829566    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/client.crt: {Name:mk23bc4661d2e5ff5cc2903f3b0a6ca33cbb4019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:37:07.829890    8222 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/client.key ...
	I0718 01:37:07.829898    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/client.key: {Name:mke9141fc5f91b9d6a2d197b765ec1cd0a4ddb2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:37:07.830106    8222 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key.dd3b5fb2
	I0718 01:37:07.830128    8222 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0718 01:37:07.878882    8222 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt.dd3b5fb2 ...
	I0718 01:37:07.878895    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt.dd3b5fb2: {Name:mk45cd49a1397285067d1f7fff201419ec4ebbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:37:07.879183    8222 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key.dd3b5fb2 ...
	I0718 01:37:07.879193    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key.dd3b5fb2: {Name:mkb1266db052e07b0edfac2f567868e9a91d9ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:37:07.879379    8222 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt
	I0718 01:37:07.879526    8222 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key
	I0718 01:37:07.879668    8222 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.key
	I0718 01:37:07.879684    8222 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.crt with IP's: []
	I0718 01:37:07.998617    8222 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.crt ...
	I0718 01:37:07.998626    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.crt: {Name:mk37043cdcd0ba7e097edb8c2a75f979008e6132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:37:07.998875    8222 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.key ...
	I0718 01:37:07.998885    8222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.key: {Name:mka2e3da9267e7bd1fd5f4a5459fb8589cc3215d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:37:07.999079    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 01:37:07.999114    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 01:37:07.999134    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 01:37:07.999157    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 01:37:07.999177    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 01:37:07.999194    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 01:37:07.999212    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 01:37:07.999232    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 01:37:07.999342    8222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043.pem (1338 bytes)
	W0718 01:37:07.999380    8222 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043_empty.pem, impossibly tiny 0 bytes
	I0718 01:37:07.999389    8222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 01:37:07.999418    8222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem (1078 bytes)
	I0718 01:37:07.999451    8222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem (1123 bytes)
	I0718 01:37:07.999479    8222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem (1675 bytes)
	I0718 01:37:07.999544    8222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem (1708 bytes)
	I0718 01:37:07.999586    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem -> /usr/share/ca-certificates/40432.pem
	I0718 01:37:07.999605    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:37:07.999621    8222 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043.pem -> /usr/share/ca-certificates/4043.pem
	I0718 01:37:08.000096    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0718 01:37:08.018513    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 01:37:08.036382    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 01:37:08.053460    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/ingress-addon-legacy-20220718013653-4043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0718 01:37:08.071618    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 01:37:08.087981    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 01:37:08.118382    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 01:37:08.135422    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 01:37:08.152351    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem --> /usr/share/ca-certificates/40432.pem (1708 bytes)
	I0718 01:37:08.169323    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 01:37:08.186326    8222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043.pem --> /usr/share/ca-certificates/4043.pem (1338 bytes)
	I0718 01:37:08.204118    8222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 01:37:08.216829    8222 ssh_runner.go:195] Run: openssl version
	I0718 01:37:08.221945    8222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40432.pem && ln -fs /usr/share/ca-certificates/40432.pem /etc/ssl/certs/40432.pem"
	I0718 01:37:08.229514    8222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40432.pem
	I0718 01:37:08.233430    8222 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 18 08:32 /usr/share/ca-certificates/40432.pem
	I0718 01:37:08.233472    8222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40432.pem
	I0718 01:37:08.238433    8222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40432.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 01:37:08.245673    8222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 01:37:08.253225    8222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:37:08.257114    8222 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 18 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:37:08.257150    8222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:37:08.261854    8222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 01:37:08.269357    8222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4043.pem && ln -fs /usr/share/ca-certificates/4043.pem /etc/ssl/certs/4043.pem"
	I0718 01:37:08.276889    8222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4043.pem
	I0718 01:37:08.280590    8222 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 18 08:32 /usr/share/ca-certificates/4043.pem
	I0718 01:37:08.280628    8222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4043.pem
	I0718 01:37:08.285710    8222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4043.pem /etc/ssl/certs/51391683.0"
	I0718 01:37:08.293171    8222 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220718013653-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220718013653-4043 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:37:08.293280    8222 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 01:37:08.321383    8222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 01:37:08.328883    8222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 01:37:08.335847    8222 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0718 01:37:08.335920    8222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 01:37:08.342925    8222 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 01:37:08.342949    8222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0718 01:37:09.075723    8222 out.go:204]   - Generating certificates and keys ...
	I0718 01:37:12.086250    8222 out.go:204]   - Booting up control plane ...
	W0718 01:39:06.999406    8222 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220718013653-4043 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220718013653-4043 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0718 08:37:08.389830     952 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:37:12.075368     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:37:12.076620     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220718013653-4043 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220718013653-4043 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0718 08:37:08.389830     952 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:37:12.075368     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:37:12.076620     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0718 01:39:06.999440    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0718 01:39:07.420153    8222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 01:39:07.429282    8222 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0718 01:39:07.429329    8222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 01:39:07.436500    8222 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 01:39:07.436534    8222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0718 01:39:08.116648    8222 out.go:204]   - Generating certificates and keys ...
	I0718 01:39:08.966650    8222 out.go:204]   - Booting up control plane ...
	I0718 01:41:03.884661    8222 kubeadm.go:397] StartCluster complete in 3m55.59409198s
	I0718 01:41:03.884737    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 01:41:03.913920    8222 logs.go:274] 0 containers: []
	W0718 01:41:03.913934    8222 logs.go:276] No container was found matching "kube-apiserver"
	I0718 01:41:03.913992    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 01:41:03.943058    8222 logs.go:274] 0 containers: []
	W0718 01:41:03.943072    8222 logs.go:276] No container was found matching "etcd"
	I0718 01:41:03.943129    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 01:41:03.971954    8222 logs.go:274] 0 containers: []
	W0718 01:41:03.971968    8222 logs.go:276] No container was found matching "coredns"
	I0718 01:41:03.972032    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 01:41:04.000872    8222 logs.go:274] 0 containers: []
	W0718 01:41:04.000884    8222 logs.go:276] No container was found matching "kube-scheduler"
	I0718 01:41:04.000944    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 01:41:04.029870    8222 logs.go:274] 0 containers: []
	W0718 01:41:04.029902    8222 logs.go:276] No container was found matching "kube-proxy"
	I0718 01:41:04.029997    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0718 01:41:04.058504    8222 logs.go:274] 0 containers: []
	W0718 01:41:04.058517    8222 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0718 01:41:04.058575    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 01:41:04.087973    8222 logs.go:274] 0 containers: []
	W0718 01:41:04.087985    8222 logs.go:276] No container was found matching "storage-provisioner"
	I0718 01:41:04.088044    8222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 01:41:04.116750    8222 logs.go:274] 0 containers: []
	W0718 01:41:04.116763    8222 logs.go:276] No container was found matching "kube-controller-manager"
	I0718 01:41:04.116770    8222 logs.go:123] Gathering logs for kubelet ...
	I0718 01:41:04.116778    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 01:41:04.158339    8222 logs.go:123] Gathering logs for dmesg ...
	I0718 01:41:04.158354    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 01:41:04.170843    8222 logs.go:123] Gathering logs for describe nodes ...
	I0718 01:41:04.170863    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0718 01:41:04.222689    8222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0718 01:41:04.222700    8222 logs.go:123] Gathering logs for Docker ...
	I0718 01:41:04.222711    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0718 01:41:04.237715    8222 logs.go:123] Gathering logs for container status ...
	I0718 01:41:04.237728    8222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 01:41:06.290703    8222 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052951114s)
	W0718 01:41:06.290825    8222 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0718 08:39:07.484063    3434 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:39:08.952591    3434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:39:08.953405    3434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0718 01:41:06.290841    8222 out.go:239] * 
	W0718 01:41:06.290965    8222 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0718 08:39:07.484063    3434 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:39:08.952591    3434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:39:08.953405    3434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0718 01:41:06.290982    8222 out.go:239] * 
	W0718 01:41:06.291552    8222 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 01:41:06.356370    8222 out.go:177] 
	W0718 01:41:06.421450    8222 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0718 08:39:07.484063    3434 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:39:08.952591    3434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:39:08.953405    3434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0718 01:41:06.421634    8222 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0718 01:41:06.421795    8222 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0718 01:41:06.465556    8222 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220718013653-4043 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (253.53s)
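The kubeadm output above suggests locating the failed control-plane container with `docker ps -a | grep kube | grep -v pause`. A minimal sketch of that filter pipeline, run here against canned sample output so it works without a Docker daemon (the container IDs, image tags, and names below are made up for illustration; on a real node you would pipe `docker ps -a` directly):

```shell
#!/bin/sh
# Hypothetical sample of `docker ps -a` output: an exited kube-apiserver
# container, its pause (sandbox) container, and an unrelated container.
sample_ps_output='abc123  k8s.gcr.io/kube-apiserver:v1.18.20  Exited  k8s_kube-apiserver_kube-system
def456  k8s.gcr.io/pause:3.2                Up      k8s_POD_kube-apiserver_kube-system
ghi789  nginx:latest                        Up      web'

# Keep Kubernetes containers, drop the pause/sandbox containers,
# as the kubeadm troubleshooting advice recommends.
printf '%s\n' "$sample_ps_output" | grep kube | grep -v pause
# prints only the kube-apiserver line
```

Once the failing container ID is identified this way, `docker logs CONTAINERID` (as the log output notes) shows why the component crashed.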

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220718013653-4043 addons enable ingress --alsologtostderr -v=5
E0718 01:41:10.596393    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 01:41:51.557827    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220718013653-4043 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.078489964s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0718 01:41:06.608125    8572 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:41:06.608392    8572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:41:06.608397    8572 out.go:309] Setting ErrFile to fd 2...
	I0718 01:41:06.608401    8572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:41:06.608508    8572 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:41:06.609197    8572 config.go:178] Loaded profile config "ingress-addon-legacy-20220718013653-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0718 01:41:06.609212    8572 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220718013653-4043"
	I0718 01:41:06.609219    8572 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220718013653-4043"
	I0718 01:41:06.609432    8572 host.go:66] Checking if "ingress-addon-legacy-20220718013653-4043" exists ...
	I0718 01:41:06.609893    8572 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220718013653-4043 --format={{.State.Status}}
	I0718 01:41:06.698871    8572 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0718 01:41:06.721442    8572 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0718 01:41:06.743084    8572 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0718 01:41:06.764490    8572 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0718 01:41:06.786058    8572 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0718 01:41:06.786097    8572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0718 01:41:06.786228    8572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:41:06.856779    8572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:41:06.949753    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:06.999567    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:06.999586    8572 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:07.276245    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:07.326979    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:07.326998    8572 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:07.869502    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:07.924137    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:07.924152    8572 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:08.581538    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:08.633921    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:08.633937    8572 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:09.425386    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:09.475838    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:09.475855    8572 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:10.646538    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:10.696531    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:10.696545    8572 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:12.949910    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:13.000760    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:13.000775    8572 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:14.613864    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:14.664486    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:14.664505    8572 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:17.471133    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:17.522698    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:17.522716    8572 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:21.347896    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:21.399284    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:21.399300    8572 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:29.099050    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:29.152314    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:29.152329    8572 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:43.790231    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:41:43.841693    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:41:43.841707    8572 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:12.248882    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:42:12.300464    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:12.300479    8572 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:35.471184    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0718 01:42:35.522036    8572 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:35.522065    8572 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220718013653-4043"
	I0718 01:42:35.543782    8572 out.go:177] * Verifying ingress addon...
	I0718 01:42:35.566816    8572 out.go:177] 
	W0718 01:42:35.587873    8572 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220718013653-4043" does not exist: client config: context "ingress-addon-legacy-20220718013653-4043" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220718013653-4043" does not exist: client config: context "ingress-addon-legacy-20220718013653-4043" does not exist]
	W0718 01:42:35.587905    8572 out.go:239] * 
	* 
	W0718 01:42:35.590962    8572 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 01:42:35.612512    8572 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220718013653-4043
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220718013653-4043:

-- stdout --
	[
	    {
	        "Id": "d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4",
	        "Created": "2022-07-18T08:37:02.640921336Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36390,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-18T08:37:02.941290261Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/hosts",
	        "LogPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4-json.log",
	        "Name": "/ingress-addon-legacy-20220718013653-4043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220718013653-4043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220718013653-4043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3-init/diff:/var/lib/docker/overlay2/0155a28c1e691808bc7254363e6dbbca6bc736daa4a53efd06256136b9ccffc8/diff:/var/lib/docker/overlay2/785390ccfbe02ea2164ea7e4302ae44e311173f76acb63eabfcd4d68015d6e52/diff:/var/lib/docker/overlay2/df96474ebe21bf6fbcd3bf91d41d4194dd9fd81f0094fb1d72f6fda01994c351/diff:/var/lib/docker/overlay2/f4dc7db8eacf000538efa6fa8558bdcb747d4066e51ec1c358a773c2e09271a7/diff:/var/lib/docker/overlay2/aa4c8b0ec96277efded678498713e53c1b70a751b6fd7dc7ccee9a6e05b5b3f8/diff:/var/lib/docker/overlay2/cb4c669639025cf7733d34334313a090f346c95738fc907fb710fed890639f21/diff:/var/lib/docker/overlay2/07a024b847b7aac0978eb44222f9d3712dbc48d8cec8c6625855545a2c7ae448/diff:/var/lib/docker/overlay2/c0e7b154b472a3a21ee8d2f02c69d7b7923e50406f7f70062e6056026f200dc8/diff:/var/lib/docker/overlay2/67cf95a091bedd6dd0dbbd8c25178898a0e2b02be83c46fc6c2f8a1c2f02674c/diff:/var/lib/docker/overlay2/db72c65c7d673f0864a2c0a5dc96d808e37979b3bd687e68158a2d8b5f117825/diff:/var/lib/docker/overlay2/afa7c8c68e434d7b0de4251dc5611f8df1982d005845aac4e890a7763846c981/diff:/var/lib/docker/overlay2/fa6a8262350ae704a34d604b280f0219188f238c96c1c00402284867c62dac9b/diff:/var/lib/docker/overlay2/b4ca49622151ae6f59da73489bc287799c862b6ea5f501d50e1f5568054c19de/diff:/var/lib/docker/overlay2/f3031d98fba997baca831f6207d3037f01c7da3fa2ac76f99bc611d4168ee33d/diff:/var/lib/docker/overlay2/7e20f07fbf4fc050782fe533a5c0e929f5fc08e1bc1494470668b169763c14e1/diff:/var/lib/docker/overlay2/108580e5a81cfc53fb04e9d6b36ce60b75043b29247cc4d6cc19eb9b81647a00/diff:/var/lib/docker/overlay2/a9f59dd68b496ab360f729d54241666d174ab55664869e67a8872b60cef5ca12/diff:/var/lib/docker/overlay2/4df5325a696a9b14fda42b1aeecb02bc27cb1e67dfbfe21aebd6b8eed36b9e3f/diff:/var/lib/docker/overlay2/6dfcf99d0b9d662dcfca574f61cf73ee8594d9e744e5ae49a7f55923b03a2c3a/diff:/var/lib/docker/overlay2/788b405568bc01d169062393e0dc6283cf0059ff9c4d262121f5548e46e68538/diff:/var/lib/docker/overlay2/5c97b209193d33b13a50d1687185e6cd6af95fc2ebc75386ff80276b8197dd1e/diff:/var/lib/docker/overlay2/da440649ee72d4860bc5e559781b8d7873edbb45b2c6f37e82dc24f079f83e0c/diff:/var/lib/docker/overlay2/7016f4daa0c096e4141802b7222e3b4a2b05adb7d8cd21ea4578ebfa5cbae6a6/diff:/var/lib/docker/overlay2/ccd68a33cfb3faead1e5b4385b11360c5b56778be7cdbe4efa2227562e8ddcb1/diff:/var/lib/docker/overlay2/74545a493ce056cee52ab09c3b4f220df28d765423d6b46ea239beb8dc5db2ef/diff:/var/lib/docker/overlay2/753aa6aaf840b5186887dd205ebd62e8710d5ccabe5170548680c6e559445c2a/diff:/var/lib/docker/overlay2/791a458f173b8bb0dcdb9d18488941b8cf19c4cb83afb22d0a1bacc9675a7654/diff:/var/lib/docker/overlay2/5458881ce1af74e401eb3a10606457d34825e34d90ef078277b1d964e7edb783/diff:/var/lib/docker/overlay2/03176f6e10f98e9bb8d69fb37b851b04cebee2c2ba458ba838dd363a0315cbab/diff:/var/lib/docker/overlay2/d27ebe6d556402a77e23f3b194246c9a208d71be67f92bcc1ce6604a32fe721d/diff:/var/lib/docker/overlay2/a0ceb7b63b2bc5cda2cc5445514898b332ca9cdcad2f73fa2035bca40e4eaeae/diff:/var/lib/docker/overlay2/a7ec7247df2102087f04843233e5ba5cde0c4b60d27fb0569d4ec464928c509f/diff:/var/lib/docker/overlay2/8e48faefb8da020dbe9ffb682c540ffba4471404365c81b535f9dede181ff881/diff:/var/lib/docker/overlay2/e8dd2220075dfa1cdc1cf293daa1451d0a290c5d44378fbc7baecd5b67e12ef2/diff:/var/lib/docker/overlay2/cd999289e9e588853eb66d5862d1187afd3ace57b8f7b499ce99e6c9187d5543/diff:/var/lib/docker/overlay2/110bae2d3fa1ae298f0583dd243b411b79d8c104e55efd5dd4815c308b0b3208/diff:/var/lib/docker/overlay2/a667174986afec4ae0097f61dba34c02ba97ff6da900874c7b6c9276d2907fa4/diff:/var/lib/docker/overlay2/51327ffe92a17b372d59dbcd6875765f88c1ba0c4a6690d2f70c100d5201a353/diff:/var/lib/docker/overlay2/83b73e1c1aa1081d71e6f2c9710707bd523127816a44f4417a384d4b4e619fbb/diff:/var/lib/docker/overlay2/81486e42af37b860bd9c67a17c8f61366893b7477e9ec207373ec068cfd5e93f/diff:/var/lib/docker/overlay2/270d94a7876a2998769ca7a5234ebae1b59a1723fa38b22080253eed3ef983e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220718013653-4043",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220718013653-4043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220718013653-4043",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220718013653-4043",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220718013653-4043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4428f5b78e178aab500ec9eedba08f2da794cf2f0f5539c0505dc2ff5968e49f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51804"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51805"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51807"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4428f5b78e17",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220718013653-4043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1916bf53452",
	                        "ingress-addon-legacy-20220718013653-4043"
	                    ],
	                    "NetworkID": "708946da9e31a487d9f5be16ad672686b5da8fd0bbeded04c042a72d1745bd29",
	                    "EndpointID": "5fb40dfca82a0057f884935e69641935f592cee4b04d5f7d6869f448da780326",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220718013653-4043 -n ingress-addon-legacy-20220718013653-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220718013653-4043 -n ingress-addon-legacy-20220718013653-4043: exit status 6 (435.200558ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 01:42:36.131464    8674 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220718013653-4043" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220718013653-4043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220718013653-4043 addons enable ingress-dns --alsologtostderr -v=5
E0718 01:43:13.478449    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220718013653-4043 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.027874112s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0718 01:42:36.190624    8684 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:42:36.190967    8684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:42:36.190972    8684 out.go:309] Setting ErrFile to fd 2...
	I0718 01:42:36.190976    8684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:42:36.191080    8684 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:42:36.191664    8684 config.go:178] Loaded profile config "ingress-addon-legacy-20220718013653-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0718 01:42:36.191679    8684 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220718013653-4043"
	I0718 01:42:36.191686    8684 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220718013653-4043"
	I0718 01:42:36.191909    8684 host.go:66] Checking if "ingress-addon-legacy-20220718013653-4043" exists ...
	I0718 01:42:36.192424    8684 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220718013653-4043 --format={{.State.Status}}
	I0718 01:42:36.281549    8684 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0718 01:42:36.303742    8684 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0718 01:42:36.325362    8684 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0718 01:42:36.325401    8684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0718 01:42:36.325620    8684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220718013653-4043
	I0718 01:42:36.393873    8684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/ingress-addon-legacy-20220718013653-4043/id_rsa Username:docker}
	I0718 01:42:36.486968    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:36.535349    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:36.535367    8684 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:36.811668    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:36.866528    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:36.866543    8684 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:37.409076    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:37.459109    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:37.459126    8684 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:38.116477    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:38.166839    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:38.166856    8684 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:38.958682    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:39.012707    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:39.012722    8684 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:40.185222    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:40.240952    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:40.240972    8684 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:42.496466    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:42.548839    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:42.548856    8684 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:44.161918    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:44.213795    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:44.213810    8684 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:47.020486    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:47.071671    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:47.071685    8684 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:50.898925    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:50.950437    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:50.950452    8684 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:58.648449    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:42:58.699703    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:42:58.699717    8684 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:43:13.336663    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:43:13.391664    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:43:13.391684    8684 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:43:41.800843    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:43:41.852133    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:43:41.852147    8684 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:44:05.022679    8684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0718 01:44:05.073131    8684 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0718 01:44:05.097171    8684 out.go:177] 
	W0718 01:44:05.118215    8684 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0718 01:44:05.118238    8684 out.go:239] * 
	* 
	W0718 01:44:05.121289    8684 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 01:44:05.146773    8684 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220718013653-4043
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220718013653-4043:

-- stdout --
	[
	    {
	        "Id": "d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4",
	        "Created": "2022-07-18T08:37:02.640921336Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36390,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-18T08:37:02.941290261Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/hosts",
	        "LogPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4-json.log",
	        "Name": "/ingress-addon-legacy-20220718013653-4043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220718013653-4043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220718013653-4043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3-init/diff:/var/lib/docker/overlay2/0155a28c1e691808bc7254363e6dbbca6bc736daa4a53efd06256136b9ccffc8/diff:/var/lib/docker/overlay2/785390ccfbe02ea2164ea7e4302ae44e311173f76acb63eabfcd4d68015d6e52/diff:/var/lib/docker/overlay2/df96474ebe21bf6fbcd3bf91d41d4194dd9fd81f0094fb1d72f6fda01994c351/diff:/var/lib/docker/overlay2/f4dc7db8eacf000538efa6fa8558bdcb747d4066e51ec1c358a773c2e09271a7/diff:/var/lib/docker/overlay2/aa4c8b0ec96277efded678498713e53c1b70a751b6fd7dc7ccee9a6e05b5b3f8/diff:/var/lib/docker/overlay2/cb4c669639025cf7733d34334313a090f346c95738fc907fb710fed890639f21/diff:/var/lib/docker/overlay2/07a024b847b7aac0978eb44222f9d3712dbc48d8cec8c6625855545a2c7ae448/diff:/var/lib/docker/overlay2/c0e7b154b472a3a21ee8d2f02c69d7b7923e50406f7f70062e6056026f200dc8/diff:/var/lib/docker/overlay2/67cf95a091bedd6dd0dbbd8c25178898a0e2b02be83c46fc6c2f8a1c2f02674c/diff:/var/lib/docker/overlay2/db72c65c7d673f0864a2c0a5dc96d808e37979b3bd687e68158a2d8b5f117825/diff:/var/lib/docker/overlay2/afa7c8c68e434d7b0de4251dc5611f8df1982d005845aac4e890a7763846c981/diff:/var/lib/docker/overlay2/fa6a8262350ae704a34d604b280f0219188f238c96c1c00402284867c62dac9b/diff:/var/lib/docker/overlay2/b4ca49622151ae6f59da73489bc287799c862b6ea5f501d50e1f5568054c19de/diff:/var/lib/docker/overlay2/f3031d98fba997baca831f6207d3037f01c7da3fa2ac76f99bc611d4168ee33d/diff:/var/lib/docker/overlay2/7e20f07fbf4fc050782fe533a5c0e929f5fc08e1bc1494470668b169763c14e1/diff:/var/lib/docker/overlay2/108580e5a81cfc53fb04e9d6b36ce60b75043b29247cc4d6cc19eb9b81647a00/diff:/var/lib/docker/overlay2/a9f59dd68b496ab360f729d54241666d174ab55664869e67a8872b60cef5ca12/diff:/var/lib/docker/overlay2/4df5325a696a9b14fda42b1aeecb02bc27cb1e67dfbfe21aebd6b8eed36b9e3f/diff:/var/lib/docker/overlay2/6dfcf99d0b9d662dcfca574f61cf73ee8594d9e744e5ae49a7f55923b03a2c3a/diff:/var/lib/docker/overlay2/788b405568bc01d169062393e0dc6283cf0059ff9c4d262121f5548e46e68538/diff:/var/lib/docker/overlay2/5c97b209193d33b13a50d1687185e6cd6af95fc2ebc75386ff80276b8197dd1e/diff:/var/lib/docker/overlay2/da440649ee72d4860bc5e559781b8d7873edbb45b2c6f37e82dc24f079f83e0c/diff:/var/lib/docker/overlay2/7016f4daa0c096e4141802b7222e3b4a2b05adb7d8cd21ea4578ebfa5cbae6a6/diff:/var/lib/docker/overlay2/ccd68a33cfb3faead1e5b4385b11360c5b56778be7cdbe4efa2227562e8ddcb1/diff:/var/lib/docker/overlay2/74545a493ce056cee52ab09c3b4f220df28d765423d6b46ea239beb8dc5db2ef/diff:/var/lib/docker/overlay2/753aa6aaf840b5186887dd205ebd62e8710d5ccabe5170548680c6e559445c2a/diff:/var/lib/docker/overlay2/791a458f173b8bb0dcdb9d18488941b8cf19c4cb83afb22d0a1bacc9675a7654/diff:/var/lib/docker/overlay2/5458881ce1af74e401eb3a10606457d34825e34d90ef078277b1d964e7edb783/diff:/var/lib/docker/overlay2/03176f6e10f98e9bb8d69fb37b851b04cebee2c2ba458ba838dd363a0315cbab/diff:/var/lib/docker/overlay2/d27ebe6d556402a77e23f3b194246c9a208d71be67f92bcc1ce6604a32fe721d/diff:/var/lib/docker/overlay2/a0ceb7b63b2bc5cda2cc5445514898b332ca9cdcad2f73fa2035bca40e4
eaeae/diff:/var/lib/docker/overlay2/a7ec7247df2102087f04843233e5ba5cde0c4b60d27fb0569d4ec464928c509f/diff:/var/lib/docker/overlay2/8e48faefb8da020dbe9ffb682c540ffba4471404365c81b535f9dede181ff881/diff:/var/lib/docker/overlay2/e8dd2220075dfa1cdc1cf293daa1451d0a290c5d44378fbc7baecd5b67e12ef2/diff:/var/lib/docker/overlay2/cd999289e9e588853eb66d5862d1187afd3ace57b8f7b499ce99e6c9187d5543/diff:/var/lib/docker/overlay2/110bae2d3fa1ae298f0583dd243b411b79d8c104e55efd5dd4815c308b0b3208/diff:/var/lib/docker/overlay2/a667174986afec4ae0097f61dba34c02ba97ff6da900874c7b6c9276d2907fa4/diff:/var/lib/docker/overlay2/51327ffe92a17b372d59dbcd6875765f88c1ba0c4a6690d2f70c100d5201a353/diff:/var/lib/docker/overlay2/83b73e1c1aa1081d71e6f2c9710707bd523127816a44f4417a384d4b4e619fbb/diff:/var/lib/docker/overlay2/81486e42af37b860bd9c67a17c8f61366893b7477e9ec207373ec068cfd5e93f/diff:/var/lib/docker/overlay2/270d94a7876a2998769ca7a5234ebae1b59a1723fa38b22080253eed3ef983e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220718013653-4043",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220718013653-4043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220718013653-4043",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220718013653-4043",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220718013653-4043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4428f5b78e178aab500ec9eedba08f2da794cf2f0f5539c0505dc2ff5968e49f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51804"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51805"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51807"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4428f5b78e17",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220718013653-4043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1916bf53452",
	                        "ingress-addon-legacy-20220718013653-4043"
	                    ],
	                    "NetworkID": "708946da9e31a487d9f5be16ad672686b5da8fd0bbeded04c042a72d1745bd29",
	                    "EndpointID": "5fb40dfca82a0057f884935e69641935f592cee4b04d5f7d6869f448da780326",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220718013653-4043 -n ingress-addon-legacy-20220718013653-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220718013653-4043 -n ingress-addon-legacy-20220718013653-4043: exit status 6 (434.986118ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 01:44:05.667108    8782 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220718013653-4043" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220718013653-4043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220718013653-4043
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220718013653-4043:

-- stdout --
	[
	    {
	        "Id": "d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4",
	        "Created": "2022-07-18T08:37:02.640921336Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36390,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-18T08:37:02.941290261Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/hosts",
	        "LogPath": "/var/lib/docker/containers/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4/d1916bf5345235ddbb0cc0b15dc71e5bc196e0da7da2b1a2925b0cd763e4e6a4-json.log",
	        "Name": "/ingress-addon-legacy-20220718013653-4043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220718013653-4043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220718013653-4043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3-init/diff:/var/lib/docker/overlay2/0155a28c1e691808bc7254363e6dbbca6bc736daa4a53efd06256136b9ccffc8/diff:/var/lib/docker/overlay2/785390ccfbe02ea2164ea7e4302ae44e311173f76acb63eabfcd4d68015d6e52/diff:/var/lib/docker/overlay2/df96474ebe21bf6fbcd3bf91d41d4194dd9fd81f0094fb1d72f6fda01994c351/diff:/var/lib/docker/overlay2/f4dc7db8eacf000538efa6fa8558bdcb747d4066e51ec1c358a773c2e09271a7/diff:/var/lib/docker/overlay2/aa4c8b0ec96277efded678498713e53c1b70a751b6fd7dc7ccee9a6e05b5b3f8/diff:/var/lib/docker/overlay2/cb4c669639025cf7733d34334313a090f346c95738fc907fb710fed890639f21/diff:/var/lib/docker/overlay2/07a024b847b7aac0978eb44222f9d3712dbc48d8cec8c6625855545a2c7ae448/diff:/var/lib/docker/overlay2/c0e7b154b472a3a21ee8d2f02c69d7b7923e50406f7f70062e6056026f200dc8/diff:/var/lib/docker/overlay2/67cf95a091bedd6dd0dbbd8c25178898a0e2b02be83c46fc6c2f8a1c2f02674c/diff:/var/lib/docker/overlay2/db72c6
5c7d673f0864a2c0a5dc96d808e37979b3bd687e68158a2d8b5f117825/diff:/var/lib/docker/overlay2/afa7c8c68e434d7b0de4251dc5611f8df1982d005845aac4e890a7763846c981/diff:/var/lib/docker/overlay2/fa6a8262350ae704a34d604b280f0219188f238c96c1c00402284867c62dac9b/diff:/var/lib/docker/overlay2/b4ca49622151ae6f59da73489bc287799c862b6ea5f501d50e1f5568054c19de/diff:/var/lib/docker/overlay2/f3031d98fba997baca831f6207d3037f01c7da3fa2ac76f99bc611d4168ee33d/diff:/var/lib/docker/overlay2/7e20f07fbf4fc050782fe533a5c0e929f5fc08e1bc1494470668b169763c14e1/diff:/var/lib/docker/overlay2/108580e5a81cfc53fb04e9d6b36ce60b75043b29247cc4d6cc19eb9b81647a00/diff:/var/lib/docker/overlay2/a9f59dd68b496ab360f729d54241666d174ab55664869e67a8872b60cef5ca12/diff:/var/lib/docker/overlay2/4df5325a696a9b14fda42b1aeecb02bc27cb1e67dfbfe21aebd6b8eed36b9e3f/diff:/var/lib/docker/overlay2/6dfcf99d0b9d662dcfca574f61cf73ee8594d9e744e5ae49a7f55923b03a2c3a/diff:/var/lib/docker/overlay2/788b405568bc01d169062393e0dc6283cf0059ff9c4d262121f5548e46e68538/diff:/var/lib/d
ocker/overlay2/5c97b209193d33b13a50d1687185e6cd6af95fc2ebc75386ff80276b8197dd1e/diff:/var/lib/docker/overlay2/da440649ee72d4860bc5e559781b8d7873edbb45b2c6f37e82dc24f079f83e0c/diff:/var/lib/docker/overlay2/7016f4daa0c096e4141802b7222e3b4a2b05adb7d8cd21ea4578ebfa5cbae6a6/diff:/var/lib/docker/overlay2/ccd68a33cfb3faead1e5b4385b11360c5b56778be7cdbe4efa2227562e8ddcb1/diff:/var/lib/docker/overlay2/74545a493ce056cee52ab09c3b4f220df28d765423d6b46ea239beb8dc5db2ef/diff:/var/lib/docker/overlay2/753aa6aaf840b5186887dd205ebd62e8710d5ccabe5170548680c6e559445c2a/diff:/var/lib/docker/overlay2/791a458f173b8bb0dcdb9d18488941b8cf19c4cb83afb22d0a1bacc9675a7654/diff:/var/lib/docker/overlay2/5458881ce1af74e401eb3a10606457d34825e34d90ef078277b1d964e7edb783/diff:/var/lib/docker/overlay2/03176f6e10f98e9bb8d69fb37b851b04cebee2c2ba458ba838dd363a0315cbab/diff:/var/lib/docker/overlay2/d27ebe6d556402a77e23f3b194246c9a208d71be67f92bcc1ce6604a32fe721d/diff:/var/lib/docker/overlay2/a0ceb7b63b2bc5cda2cc5445514898b332ca9cdcad2f73fa2035bca40e4
eaeae/diff:/var/lib/docker/overlay2/a7ec7247df2102087f04843233e5ba5cde0c4b60d27fb0569d4ec464928c509f/diff:/var/lib/docker/overlay2/8e48faefb8da020dbe9ffb682c540ffba4471404365c81b535f9dede181ff881/diff:/var/lib/docker/overlay2/e8dd2220075dfa1cdc1cf293daa1451d0a290c5d44378fbc7baecd5b67e12ef2/diff:/var/lib/docker/overlay2/cd999289e9e588853eb66d5862d1187afd3ace57b8f7b499ce99e6c9187d5543/diff:/var/lib/docker/overlay2/110bae2d3fa1ae298f0583dd243b411b79d8c104e55efd5dd4815c308b0b3208/diff:/var/lib/docker/overlay2/a667174986afec4ae0097f61dba34c02ba97ff6da900874c7b6c9276d2907fa4/diff:/var/lib/docker/overlay2/51327ffe92a17b372d59dbcd6875765f88c1ba0c4a6690d2f70c100d5201a353/diff:/var/lib/docker/overlay2/83b73e1c1aa1081d71e6f2c9710707bd523127816a44f4417a384d4b4e619fbb/diff:/var/lib/docker/overlay2/81486e42af37b860bd9c67a17c8f61366893b7477e9ec207373ec068cfd5e93f/diff:/var/lib/docker/overlay2/270d94a7876a2998769ca7a5234ebae1b59a1723fa38b22080253eed3ef983e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b56013bbefde8f80051f21416c3d7d8908f34e7e383a66d9dae184b0055cc0d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220718013653-4043",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220718013653-4043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220718013653-4043",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220718013653-4043",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220718013653-4043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4428f5b78e178aab500ec9eedba08f2da794cf2f0f5539c0505dc2ff5968e49f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51804"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51805"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51807"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4428f5b78e17",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220718013653-4043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1916bf53452",
	                        "ingress-addon-legacy-20220718013653-4043"
	                    ],
	                    "NetworkID": "708946da9e31a487d9f5be16ad672686b5da8fd0bbeded04c042a72d1745bd29",
	                    "EndpointID": "5fb40dfca82a0057f884935e69641935f592cee4b04d5f7d6869f448da780326",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220718013653-4043 -n ingress-addon-legacy-20220718013653-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220718013653-4043 -n ingress-addon-legacy-20220718013653-4043: exit status 6 (432.196567ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 01:44:06.173181    8795 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220718013653-4043" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220718013653-4043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.51s)

TestPreload (266.46s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220718015649-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0718 01:56:52.724369    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 02:00:13.675451    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 02:00:29.668266    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220718015649-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m23.365676184s)

-- stdout --
	* [test-preload-20220718015649-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220718015649-4043 in cluster test-preload-20220718015649-4043
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0718 01:56:49.998852   12531 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:56:49.999428   12531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:56:49.999438   12531 out.go:309] Setting ErrFile to fd 2...
	I0718 01:56:49.999445   12531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:56:49.999702   12531 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:56:50.000676   12531 out.go:303] Setting JSON to false
	I0718 01:56:50.016274   12531 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3383,"bootTime":1658131227,"procs":369,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 01:56:50.016373   12531 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 01:56:50.038568   12531 out.go:177] * [test-preload-20220718015649-4043] minikube v1.26.0 on Darwin 12.4
	I0718 01:56:50.060885   12531 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 01:56:50.060890   12531 notify.go:193] Checking for updates...
	I0718 01:56:50.104452   12531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 01:56:50.125475   12531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 01:56:50.146678   12531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 01:56:50.168613   12531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 01:56:50.191113   12531 driver.go:360] Setting default libvirt URI to qemu:///system
	I0718 01:56:50.262466   12531 docker.go:137] docker version: linux-20.10.17
	I0718 01:56:50.262593   12531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:56:50.396905   12531 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-18 08:56:50.332438518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:56:50.440696   12531 out.go:177] * Using the docker driver based on user configuration
	I0718 01:56:50.462914   12531 start.go:284] selected driver: docker
	I0718 01:56:50.462938   12531 start.go:808] validating driver "docker" against <nil>
	I0718 01:56:50.462961   12531 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 01:56:50.466208   12531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:56:50.601432   12531 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-18 08:56:50.53700849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:56:50.601614   12531 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0718 01:56:50.601782   12531 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 01:56:50.623927   12531 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 01:56:50.645613   12531 cni.go:95] Creating CNI manager for ""
	I0718 01:56:50.645646   12531 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 01:56:50.645670   12531 start_flags.go:310] config:
	{Name:test-preload-20220718015649-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220718015649-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:56:50.667723   12531 out.go:177] * Starting control plane node test-preload-20220718015649-4043 in cluster test-preload-20220718015649-4043
	I0718 01:56:50.711694   12531 cache.go:120] Beginning downloading kic base image for docker with docker
	I0718 01:56:50.733820   12531 out.go:177] * Pulling base image ...
	I0718 01:56:50.782288   12531 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0718 01:56:50.782329   12531 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0718 01:56:50.782602   12531 cache.go:107] acquiring lock: {Name:mkb949a99fd957c748a8dd90dc19bbec9cb91f41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.782603   12531 cache.go:107] acquiring lock: {Name:mkd7825d247038faef026cb191a92ee40f0fd769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.784428   12531 cache.go:107] acquiring lock: {Name:mkfeb9445c1fd57f081aa2e3d1ad6495072319f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.784650   12531 cache.go:107] acquiring lock: {Name:mkeaa111577ac9221d39b19ef43a956d86e7d4a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.784835   12531 cache.go:107] acquiring lock: {Name:mkf642d1e10b0536fb22a85033711b91fd689864 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.784856   12531 cache.go:107] acquiring lock: {Name:mkb85c3c3c995092aaf77ff22cc19f1787bfd87f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.784877   12531 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 01:56:50.784936   12531 cache.go:107] acquiring lock: {Name:mkf75012a2ce06322c1dc48befffa97f9e4bd182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.784992   12531 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.40281ms
	I0718 01:56:50.785038   12531 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 01:56:50.785045   12531 cache.go:107] acquiring lock: {Name:mkd0aba3591c49fe190092d0ebfeb4e9b7028e5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.785758   12531 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:50.785890   12531 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:50.785927   12531 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:50.785963   12531 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:50.785989   12531 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0718 01:56:50.785995   12531 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:50.786039   12531 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0718 01:56:50.786019   12531 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/config.json ...
	I0718 01:56:50.786087   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/config.json: {Name:mka13bb83eaeb9d59b0495af4e774cb8311a86d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:56:50.798879   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:50.799256   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:50.800186   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:50.801096   12531 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:50.801451   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:50.801632   12531 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0718 01:56:50.801960   12531 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0718 01:56:50.854552   12531 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0718 01:56:50.854575   12531 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0718 01:56:50.854591   12531 cache.go:208] Successfully downloaded all kic artifacts
	I0718 01:56:50.854625   12531 start.go:352] acquiring machines lock for test-preload-20220718015649-4043: {Name:mk07e38dc9a009e48fb017d34b3bd5a952175234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:56:50.854768   12531 start.go:356] acquired machines lock for "test-preload-20220718015649-4043" in 130.791µs
	I0718 01:56:50.854792   12531 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220718015649-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220718015649-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 01:56:50.854887   12531 start.go:131] createHost starting for "" (driver="docker")
	I0718 01:56:50.897330   12531 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 01:56:50.897597   12531 start.go:165] libmachine.API.Create for "test-preload-20220718015649-4043" (driver="docker")
	I0718 01:56:50.897626   12531 client.go:168] LocalClient.Create starting
	I0718 01:56:50.897701   12531 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem
	I0718 01:56:50.897737   12531 main.go:134] libmachine: Decoding PEM data...
	I0718 01:56:50.897751   12531 main.go:134] libmachine: Parsing certificate...
	I0718 01:56:50.897812   12531 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem
	I0718 01:56:50.897835   12531 main.go:134] libmachine: Decoding PEM data...
	I0718 01:56:50.897860   12531 main.go:134] libmachine: Parsing certificate...
	I0718 01:56:50.898294   12531 cli_runner.go:164] Run: docker network inspect test-preload-20220718015649-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 01:56:50.965934   12531 cli_runner.go:211] docker network inspect test-preload-20220718015649-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 01:56:50.966008   12531 network_create.go:272] running [docker network inspect test-preload-20220718015649-4043] to gather additional debugging logs...
	I0718 01:56:50.966023   12531 cli_runner.go:164] Run: docker network inspect test-preload-20220718015649-4043
	W0718 01:56:51.033051   12531 cli_runner.go:211] docker network inspect test-preload-20220718015649-4043 returned with exit code 1
	I0718 01:56:51.033076   12531 network_create.go:275] error running [docker network inspect test-preload-20220718015649-4043]: docker network inspect test-preload-20220718015649-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220718015649-4043
	I0718 01:56:51.033105   12531 network_create.go:277] output of [docker network inspect test-preload-20220718015649-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220718015649-4043
	
	** /stderr **
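	[editor's note] The debug pattern above (network_create.go re-running the failed `docker network inspect` and logging its stdout and stderr as separate blocks) can be sketched as the Go helper below. The helper name is illustrative, not minikube's actual code, and the demo uses `sh` in place of `docker` so it runs without a Docker daemon:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runForDebug re-runs a command and captures stdout and stderr into separate
// buffers, so a caller can log "-- stdout --" and "** stderr **" blocks like
// the ones above. (Illustrative sketch, not minikube's helper.)
func runForDebug(name string, args ...string) (stdout, stderr string, err error) {
	cmd := exec.Command(name, args...)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run() // non-nil err carries the exit status, e.g. "exit status 1"
	return out.String(), errBuf.String(), err
}

func main() {
	// Stand-in for `docker network inspect <missing-network>`.
	out, errOut, err := runForDebug("sh", "-c", "echo []; echo 'Error: No such network' >&2; exit 1")
	fmt.Printf("-- stdout --\n%s-- /stdout --\n** stderr **\n%s** /stderr **\nerr: %v\n", out, errOut, err)
}
```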
	I0718 01:56:51.033162   12531 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 01:56:51.100370   12531 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007f0348] misses:0}
	I0718 01:56:51.100407   12531 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 01:56:51.100422   12531 network_create.go:115] attempt to create docker network test-preload-20220718015649-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0718 01:56:51.100481   12531 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 test-preload-20220718015649-4043
	W0718 01:56:51.164873   12531 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 test-preload-20220718015649-4043 returned with exit code 1
	W0718 01:56:51.164916   12531 network_create.go:107] failed to create docker network test-preload-20220718015649-4043 192.168.49.0/24, will retry: subnet is taken
	I0718 01:56:51.165167   12531 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f0348] amended:false}} dirty:map[] misses:0}
	I0718 01:56:51.165184   12531 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 01:56:51.165382   12531 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f0348] amended:true}} dirty:map[192.168.49.0:0xc0007f0348 192.168.58.0:0xc00000f588] misses:0}
	I0718 01:56:51.165397   12531 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 01:56:51.165405   12531 network_create.go:115] attempt to create docker network test-preload-20220718015649-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0718 01:56:51.165466   12531 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 test-preload-20220718015649-4043
	W0718 01:56:51.229305   12531 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 test-preload-20220718015649-4043 returned with exit code 1
	W0718 01:56:51.229340   12531 network_create.go:107] failed to create docker network test-preload-20220718015649-4043 192.168.58.0/24, will retry: subnet is taken
	I0718 01:56:51.229588   12531 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f0348] amended:true}} dirty:map[192.168.49.0:0xc0007f0348 192.168.58.0:0xc00000f588] misses:1}
	I0718 01:56:51.229607   12531 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 01:56:51.229799   12531 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f0348] amended:true}} dirty:map[192.168.49.0:0xc0007f0348 192.168.58.0:0xc00000f588 192.168.67.0:0xc0003b8378] misses:1}
	I0718 01:56:51.229814   12531 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 01:56:51.229822   12531 network_create.go:115] attempt to create docker network test-preload-20220718015649-4043 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0718 01:56:51.229875   12531 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 test-preload-20220718015649-4043
	I0718 01:56:51.326487   12531 network_create.go:99] docker network test-preload-20220718015649-4043 192.168.67.0/24 created
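	[editor's note] The retry sequence above walks a fixed candidate order: 192.168.49.0/24 is taken, 192.168.58.0/24 is taken, and 192.168.67.0/24 succeeds. A minimal sketch of that candidate progression, assuming only the step-of-9 pattern visible in the log (the function name is hypothetical, not minikube's network package):

```go
package main

import (
	"fmt"
	"net"
)

// candidateSubnets lists the /24 subnets in the order the log tries them:
// start at 192.168.49.0 and bump the third octet by 9 per attempt
// (49 -> 58 -> 67), retrying `docker network create` on each.
func candidateSubnets(n int) []string {
	subnets := make([]string, 0, n)
	ip := net.IPv4(192, 168, 49, 0).To4()
	for i := 0; i < n; i++ {
		subnets = append(subnets, fmt.Sprintf("%s/24", ip))
		ip[2] += 9 // next candidate block
	}
	return subnets
}

func main() {
	fmt.Println(candidateSubnets(3)) // [192.168.49.0/24 192.168.58.0/24 192.168.67.0/24]
}
```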
	I0718 01:56:51.326512   12531 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220718015649-4043" container
	I0718 01:56:51.326602   12531 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 01:56:51.390905   12531 cli_runner.go:164] Run: docker volume create test-preload-20220718015649-4043 --label name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 --label created_by.minikube.sigs.k8s.io=true
	I0718 01:56:51.455886   12531 oci.go:103] Successfully created a docker volume test-preload-20220718015649-4043
	I0718 01:56:51.455980   12531 cli_runner.go:164] Run: docker run --rm --name test-preload-20220718015649-4043-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 --entrypoint /usr/bin/test -v test-preload-20220718015649-4043:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0718 01:56:51.913028   12531 oci.go:107] Successfully prepared a docker volume test-preload-20220718015649-4043
	I0718 01:56:51.913062   12531 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0718 01:56:51.913138   12531 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0718 01:56:52.048742   12531 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220718015649-4043 --name test-preload-20220718015649-4043 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220718015649-4043 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220718015649-4043 --network test-preload-20220718015649-4043 --ip 192.168.67.2 --volume test-preload-20220718015649-4043:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0718 01:56:52.300394   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0718 01:56:52.415469   12531 cli_runner.go:164] Run: docker container inspect test-preload-20220718015649-4043 --format={{.State.Running}}
	I0718 01:56:52.438900   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0718 01:56:52.439973   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0718 01:56:52.441227   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0718 01:56:52.489634   12531 cli_runner.go:164] Run: docker container inspect test-preload-20220718015649-4043 --format={{.State.Status}}
	I0718 01:56:52.494773   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0718 01:56:52.547753   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0718 01:56:52.548436   12531 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0718 01:56:52.565990   12531 cli_runner.go:164] Run: docker exec test-preload-20220718015649-4043 stat /var/lib/dpkg/alternatives/iptables
	I0718 01:56:52.631946   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0718 01:56:52.631970   12531 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 1.847102335s
	I0718 01:56:52.631986   12531 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0718 01:56:52.706743   12531 oci.go:144] the created container "test-preload-20220718015649-4043" has a running status.
	I0718 01:56:52.706766   12531 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa...
	I0718 01:56:52.969419   12531 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0718 01:56:53.099053   12531 cli_runner.go:164] Run: docker container inspect test-preload-20220718015649-4043 --format={{.State.Status}}
	I0718 01:56:53.101740   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0718 01:56:53.101771   12531 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 2.317026537s
	I0718 01:56:53.101827   12531 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0718 01:56:53.173479   12531 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0718 01:56:53.173494   12531 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220718015649-4043 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0718 01:56:53.290930   12531 cli_runner.go:164] Run: docker container inspect test-preload-20220718015649-4043 --format={{.State.Status}}
	I0718 01:56:53.360359   12531 machine.go:88] provisioning docker machine ...
	I0718 01:56:53.360400   12531 ubuntu.go:169] provisioning hostname "test-preload-20220718015649-4043"
	I0718 01:56:53.360496   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:53.428977   12531 main.go:134] libmachine: Using SSH client type: native
	I0718 01:56:53.429197   12531 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55351 <nil> <nil>}
	I0718 01:56:53.429212   12531 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220718015649-4043 && echo "test-preload-20220718015649-4043" | sudo tee /etc/hostname
	I0718 01:56:53.555750   12531 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220718015649-4043
	
	I0718 01:56:53.555850   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:53.631156   12531 main.go:134] libmachine: Using SSH client type: native
	I0718 01:56:53.631294   12531 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55351 <nil> <nil>}
	I0718 01:56:53.631312   12531 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220718015649-4043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220718015649-4043/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220718015649-4043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 01:56:53.751422   12531 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0718 01:56:53.751441   12531 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube}
	I0718 01:56:53.751466   12531 ubuntu.go:177] setting up certificates
	I0718 01:56:53.751472   12531 provision.go:83] configureAuth start
	I0718 01:56:53.751547   12531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220718015649-4043
	I0718 01:56:53.774856   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0718 01:56:53.774880   12531 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 2.992268815s
	I0718 01:56:53.774896   12531 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0718 01:56:53.827006   12531 provision.go:138] copyHostCerts
	I0718 01:56:53.827094   12531 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem, removing ...
	I0718 01:56:53.827104   12531 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem
	I0718 01:56:53.827199   12531 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.pem (1078 bytes)
	I0718 01:56:53.827407   12531 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem, removing ...
	I0718 01:56:53.827415   12531 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem
	I0718 01:56:53.827486   12531 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cert.pem (1123 bytes)
	I0718 01:56:53.827674   12531 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem, removing ...
	I0718 01:56:53.827681   12531 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem
	I0718 01:56:53.827749   12531 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/key.pem (1675 bytes)
	I0718 01:56:53.827874   12531 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220718015649-4043 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220718015649-4043]
	I0718 01:56:53.937359   12531 provision.go:172] copyRemoteCerts
	I0718 01:56:53.937424   12531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 01:56:53.937473   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:54.006929   12531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55351 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa Username:docker}
	I0718 01:56:54.093434   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0718 01:56:54.110541   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0718 01:56:54.128312   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 01:56:54.135541   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0718 01:56:54.135560   12531 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 3.352830414s
	I0718 01:56:54.135583   12531 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0718 01:56:54.148942   12531 provision.go:86] duration metric: configureAuth took 397.452792ms
	I0718 01:56:54.148954   12531 ubuntu.go:193] setting minikube options for container-runtime
	I0718 01:56:54.149086   12531 config.go:178] Loaded profile config "test-preload-20220718015649-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0718 01:56:54.149136   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:54.220938   12531 main.go:134] libmachine: Using SSH client type: native
	I0718 01:56:54.221124   12531 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55351 <nil> <nil>}
	I0718 01:56:54.221144   12531 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 01:56:54.342217   12531 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0718 01:56:54.342230   12531 ubuntu.go:71] root file system type: overlay
	I0718 01:56:54.342379   12531 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 01:56:54.342451   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:54.412135   12531 main.go:134] libmachine: Using SSH client type: native
	I0718 01:56:54.412292   12531 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55351 <nil> <nil>}
	I0718 01:56:54.412340   12531 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 01:56:54.541865   12531 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 01:56:54.541946   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:54.610672   12531 main.go:134] libmachine: Using SSH client type: native
	I0718 01:56:54.610817   12531 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55351 <nil> <nil>}
	I0718 01:56:54.610831   12531 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 01:56:54.896511   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0718 01:56:54.896537   12531 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 4.113815635s
	I0718 01:56:54.896548   12531 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0718 01:56:54.987331   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0718 01:56:54.987350   12531 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 4.20270196s
	I0718 01:56:54.987365   12531 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0718 01:56:55.087025   12531 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0718 01:56:55.087048   12531 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 4.302402037s
	I0718 01:56:55.087057   12531 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0718 01:56:55.087072   12531 cache.go:87] Successfully saved all images to host disk.
	I0718 01:56:55.329748   12531 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-18 08:56:54.553922252 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0718 01:56:55.329772   12531 machine.go:91] provisioned docker machine in 1.969368826s
	I0718 01:56:55.329786   12531 client.go:171] LocalClient.Create took 4.432102125s
	I0718 01:56:55.329842   12531 start.go:173] duration metric: libmachine.API.Create for "test-preload-20220718015649-4043" took 4.432167796s
	I0718 01:56:55.329851   12531 start.go:306] post-start starting for "test-preload-20220718015649-4043" (driver="docker")
	I0718 01:56:55.329855   12531 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 01:56:55.329917   12531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 01:56:55.329967   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:55.401512   12531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55351 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa Username:docker}
	I0718 01:56:55.489412   12531 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 01:56:55.493144   12531 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 01:56:55.493158   12531 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 01:56:55.493165   12531 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 01:56:55.493172   12531 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0718 01:56:55.493181   12531 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/addons for local assets ...
	I0718 01:56:55.493296   12531 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files for local assets ...
	I0718 01:56:55.493438   12531 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem -> 40432.pem in /etc/ssl/certs
	I0718 01:56:55.493592   12531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 01:56:55.500587   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem --> /etc/ssl/certs/40432.pem (1708 bytes)
	I0718 01:56:55.518131   12531 start.go:309] post-start completed in 188.269801ms
	I0718 01:56:55.518630   12531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220718015649-4043
	I0718 01:56:55.586610   12531 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/config.json ...
	I0718 01:56:55.587019   12531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 01:56:55.587080   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:55.655868   12531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55351 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa Username:docker}
	I0718 01:56:55.739629   12531 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 01:56:55.744563   12531 start.go:134] duration metric: createHost completed in 4.889607432s
	I0718 01:56:55.744582   12531 start.go:81] releasing machines lock for "test-preload-20220718015649-4043", held for 4.889746441s
	I0718 01:56:55.744657   12531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220718015649-4043
	I0718 01:56:55.814211   12531 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0718 01:56:55.814218   12531 ssh_runner.go:195] Run: systemctl --version
	I0718 01:56:55.814274   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:55.814280   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:55.889625   12531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55351 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa Username:docker}
	I0718 01:56:55.890467   12531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55351 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/test-preload-20220718015649-4043/id_rsa Username:docker}
	I0718 01:56:56.468317   12531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 01:56:56.477826   12531 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0718 01:56:56.477876   12531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 01:56:56.486896   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 01:56:56.499340   12531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 01:56:56.569838   12531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 01:56:56.642982   12531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 01:56:56.715392   12531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 01:56:56.907400   12531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 01:56:56.945804   12531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 01:56:57.023785   12531 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0718 01:56:57.024008   12531 cli_runner.go:164] Run: docker exec -t test-preload-20220718015649-4043 dig +short host.docker.internal
	I0718 01:56:57.148510   12531 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0718 01:56:57.148606   12531 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0718 01:56:57.152981   12531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 01:56:57.162371   12531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220718015649-4043
	I0718 01:56:57.231538   12531 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0718 01:56:57.231615   12531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 01:56:57.261519   12531 docker.go:602] Got preloaded images: 
	I0718 01:56:57.261531   12531 docker.go:608] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0718 01:56:57.261535   12531 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0718 01:56:57.268106   12531 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 01:56:57.268218   12531 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0718 01:56:57.268586   12531 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0718 01:56:57.269172   12531 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:57.270077   12531 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:57.270262   12531 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:57.271068   12531 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:57.271485   12531 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:57.275703   12531 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 01:56:57.276699   12531 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0718 01:56:57.276773   12531 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0718 01:56:57.278017   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:57.278225   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:57.278501   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:57.278797   12531 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:57.279278   12531 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:58.590513   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0718 01:56:58.590659   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0718 01:56:58.624082   12531 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0718 01:56:58.624106   12531 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0718 01:56:58.624122   12531 docker.go:283] Removing image: k8s.gcr.io/coredns:1.6.5
	I0718 01:56:58.624123   12531 docker.go:283] Removing image: k8s.gcr.io/pause:3.1
	I0718 01:56:58.624177   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0718 01:56:58.624178   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0718 01:56:58.644631   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:58.658055   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0718 01:56:58.658090   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0718 01:56:58.658189   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0718 01:56:58.658192   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0718 01:56:58.664788   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:58.690614   12531 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0718 01:56:58.690640   12531 docker.go:283] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:58.690653   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0718 01:56:58.690685   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0718 01:56:58.690690   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0718 01:56:58.690687   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0718 01:56:58.690701   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
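Every cache transfer in this stretch follows the same pattern: `stat -c "%s %y"` the target path inside the node, and treat a non-zero exit ("Process exited with status 1" above) as "absent, scp it from the local .minikube cache". A runnable sketch of that decision (hypothetical path; `: >` stands in for the scp):

```shell
# Hypothetical target path standing in for /var/lib/minikube/images/pause_3.1.
DIR=$(mktemp -d)
IMG="$DIR/pause_3.1"
rm -f "$IMG"

# Existence check exactly as in the log: size + mtime, discard output,
# branch on the exit status alone.
if ! stat -c "%s %y" "$IMG" >/dev/null 2>&1; then
  echo "needs transfer"   # in the log this is where the scp from the cache fires
  : > "$IMG"              # stand-in for the scp
fi
```

Using the exit status rather than parsing output keeps the check cheap and works over any ssh_runner transport.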
	I0718 01:56:58.696538   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:58.728217   12531 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0718 01:56:58.728244   12531 docker.go:283] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:58.728303   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0718 01:56:58.742622   12531 docker.go:250] Loading image: /var/lib/minikube/images/pause_3.1
	I0718 01:56:58.742643   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0718 01:56:58.758353   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0718 01:56:58.758489   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0718 01:56:58.776415   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:58.809290   12531 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0718 01:56:58.809316   12531 docker.go:283] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:58.809369   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0718 01:56:58.827672   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0718 01:56:58.827808   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0718 01:56:58.883291   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:59.020060   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0718 01:56:59.020082   12531 docker.go:250] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0718 01:56:59.020094   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0718 01:56:59.020114   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0718 01:56:59.020147   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0718 01:56:59.020155   12531 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0718 01:56:59.020182   12531 docker.go:283] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:59.020200   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0718 01:56:59.020233   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0718 01:56:59.020277   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0718 01:56:59.020296   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0718 01:56:59.020366   12531 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0718 01:56:59.020370   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0718 01:56:59.020392   12531 docker.go:283] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:59.020436   12531 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0718 01:56:59.154959   12531 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 01:56:59.977436   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0718 01:56:59.977528   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0718 01:56:59.977535   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0718 01:56:59.977561   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0718 01:56:59.977585   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0718 01:56:59.977637   12531 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0718 01:56:59.977656   12531 docker.go:283] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 01:56:59.977677   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0718 01:56:59.977686   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0718 01:56:59.977699   12531 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 01:57:00.078749   12531 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0718 01:57:00.078776   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0718 01:57:00.078801   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0718 01:57:00.078839   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0718 01:57:00.078873   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0718 01:57:00.078881   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0718 01:57:00.139728   12531 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0718 01:57:00.139764   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0718 01:57:01.125385   12531 docker.go:250] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0718 01:57:01.125400   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0718 01:57:01.769416   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0718 01:57:01.912645   12531 docker.go:250] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0718 01:57:01.912665   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0718 01:57:04.803370   12531 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (2.8906555s)
	I0718 01:57:04.803385   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0718 01:57:04.803465   12531 docker.go:250] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0718 01:57:04.803477   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0718 01:57:05.898418   12531 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load": (1.094915316s)
	I0718 01:57:05.898432   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0718 01:57:05.898489   12531 docker.go:250] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0718 01:57:05.898497   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0718 01:57:06.354732   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0718 01:57:06.354761   12531 docker.go:250] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0718 01:57:06.354774   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0718 01:57:07.307302   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0718 01:57:07.307400   12531 docker.go:250] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0718 01:57:07.307433   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0718 01:57:10.498796   12531 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.191308945s)
	I0718 01:57:10.498818   12531 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0718 01:57:10.498847   12531 cache_images.go:123] Successfully loaded all cached images
	I0718 01:57:10.498852   12531 cache_images.go:92] LoadImages completed in 13.23715147s
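Each load above streams a cached tarball into the runtime via `sudo cat <tar> | docker load`: the privileged `cat` reads the root-owned file and the pipe feeds the daemon from STDIN. The same pipeline shape, exercised here without Docker or sudo (`wc -c` stands in for `docker load`, and the tarball is fake):

```shell
# Fake 4 KiB "image tarball" standing in for /var/lib/minikube/images/etcd_3.4.3-0.
TAR=$(mktemp)
head -c 4096 /dev/zero > "$TAR"

# Log form: /bin/bash -c "sudo cat <tar> | docker load"
BYTES=$(/bin/bash -c "cat '$TAR' | wc -c")
echo "$BYTES"
```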
	I0718 01:57:10.498981   12531 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 01:57:10.570897   12531 cni.go:95] Creating CNI manager for ""
	I0718 01:57:10.570909   12531 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 01:57:10.570919   12531 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0718 01:57:10.570930   12531 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220718015649-4043 NodeName:test-preload-20220718015649-4043 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0718 01:57:10.571043   12531 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220718015649-4043"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 01:57:10.571111   12531 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220718015649-4043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220718015649-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0718 01:57:10.571166   12531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0718 01:57:10.579467   12531 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0718 01:57:10.579524   12531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0718 01:57:10.586781   12531 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.17.0/kubeadm
	I0718 01:57:10.586794   12531 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0718 01:57:10.586802   12531 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.17.0/kubectl
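The `?checksum=file:<url>.sha256` suffix on each download URL tells minikube's downloader to fetch the published digest and verify the binary against it before caching. The equivalent manual check looks like this (illustrative scratch file, not the real kubelet; on macOS substitute `shasum -a 256`):

```shell
# Scratch file standing in for the downloaded kubelet binary.
BIN=$(mktemp)
printf 'kubelet-bytes' > "$BIN"

# Compute the digest, then verify it in the "<hash>  <path>" format
# that sha256sum -c expects; it prints "<path>: OK" on a match.
SUM=$(sha256sum "$BIN" | awk '{print $1}')
echo "$SUM  $BIN" | sha256sum -c -
```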
	I0718 01:57:11.105893   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0718 01:57:11.110511   12531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0718 01:57:11.110543   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0718 01:57:11.237400   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0718 01:57:11.307832   12531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0718 01:57:11.307866   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0718 01:57:11.943588   12531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 01:57:12.023234   12531 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0718 01:57:12.088896   12531 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0718 01:57:12.088925   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0718 01:57:14.298387   12531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 01:57:14.306455   12531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0718 01:57:14.319885   12531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 01:57:14.333195   12531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0718 01:57:14.345816   12531 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0718 01:57:14.349654   12531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 01:57:14.359109   12531 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043 for IP: 192.168.67.2
	I0718 01:57:14.359220   12531 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.key
	I0718 01:57:14.359270   12531 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.key
	I0718 01:57:14.359307   12531 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/client.key
	I0718 01:57:14.359320   12531 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/client.crt with IP's: []
	I0718 01:57:14.492982   12531 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/client.crt ...
	I0718 01:57:14.492998   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/client.crt: {Name:mk0972962855e6fc21671195d44521e2200268c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:57:14.493324   12531 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/client.key ...
	I0718 01:57:14.493332   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/client.key: {Name:mka3dc5d45db0c66fb6a1fef09e149204cc8b5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:57:14.493551   12531 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.key.c7fa3a9e
	I0718 01:57:14.493569   12531 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0718 01:57:14.630284   12531 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.crt.c7fa3a9e ...
	I0718 01:57:14.630299   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.crt.c7fa3a9e: {Name:mk18c1020a36f6b9bf2d3d814adc34d505ebc099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:57:14.630568   12531 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.key.c7fa3a9e ...
	I0718 01:57:14.630575   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.key.c7fa3a9e: {Name:mk2f77445716b51eae2679cf726cf59c727caff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:57:14.630760   12531 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.crt
	I0718 01:57:14.630923   12531 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.key
	I0718 01:57:14.631073   12531 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.key
	I0718 01:57:14.631089   12531 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.crt with IP's: []
	I0718 01:57:14.735379   12531 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.crt ...
	I0718 01:57:14.735387   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.crt: {Name:mk654e6a9b82a1c8f3472234bcc2100dd47fb574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:57:14.735604   12531 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.key ...
	I0718 01:57:14.735612   12531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.key: {Name:mkd5de5a628faad9fdd3eb81f3f8fda40f90e45f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:57:14.735987   12531 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043.pem (1338 bytes)
	W0718 01:57:14.736031   12531 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043_empty.pem, impossibly tiny 0 bytes
	I0718 01:57:14.736040   12531 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 01:57:14.736075   12531 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem (1078 bytes)
	I0718 01:57:14.736106   12531 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem (1123 bytes)
	I0718 01:57:14.736134   12531 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/key.pem (1675 bytes)
	I0718 01:57:14.736197   12531 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem (1708 bytes)
	I0718 01:57:14.736716   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0718 01:57:14.755204   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 01:57:14.772230   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 01:57:14.789815   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/test-preload-20220718015649-4043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 01:57:14.806961   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 01:57:14.824196   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 01:57:14.841583   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 01:57:14.859996   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 01:57:14.877528   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 01:57:14.894860   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/4043.pem --> /usr/share/ca-certificates/4043.pem (1338 bytes)
	I0718 01:57:14.913141   12531 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/ssl/certs/40432.pem --> /usr/share/ca-certificates/40432.pem (1708 bytes)
	I0718 01:57:14.931232   12531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 01:57:14.945671   12531 ssh_runner.go:195] Run: openssl version
	I0718 01:57:14.951120   12531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 01:57:14.959458   12531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:57:14.963610   12531 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 18 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:57:14.963666   12531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 01:57:14.969121   12531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 01:57:14.977123   12531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4043.pem && ln -fs /usr/share/ca-certificates/4043.pem /etc/ssl/certs/4043.pem"
	I0718 01:57:14.984824   12531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4043.pem
	I0718 01:57:14.988920   12531 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 18 08:32 /usr/share/ca-certificates/4043.pem
	I0718 01:57:14.988965   12531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4043.pem
	I0718 01:57:14.994635   12531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4043.pem /etc/ssl/certs/51391683.0"
	I0718 01:57:15.002441   12531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40432.pem && ln -fs /usr/share/ca-certificates/40432.pem /etc/ssl/certs/40432.pem"
	I0718 01:57:15.042712   12531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40432.pem
	I0718 01:57:15.047366   12531 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 18 08:32 /usr/share/ca-certificates/40432.pem
	I0718 01:57:15.047410   12531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40432.pem
	I0718 01:57:15.052747   12531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40432.pem /etc/ssl/certs/3ec20f2e.0"
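	The three `test -L … || ln -fs …` commands above install each CA into the OpenSSL trust directory under its subject-hash name (`b5213941.0`, `51391683.0`, `3ec20f2e.0`): OpenSSL resolves CAs in a cert directory by `<subject-hash>.0`, and the hash comes from `openssl x509 -hash -noout`. A minimal sketch of that convention, using a throwaway self-signed cert (all file names here are illustrative, not taken from this run):

```shell
# Generate a throwaway self-signed CA cert (illustrative paths).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
# Compute the 8-hex-digit subject hash OpenSSL uses for directory lookup.
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
# The symlink minikube would create so the cert is found by hash:
echo "ln -fs /usr/share/ca-certificates/demo.pem /etc/ssl/certs/${hash}.0"
```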
	I0718 01:57:15.060638   12531 kubeadm.go:395] StartCluster: {Name:test-preload-20220718015649-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220718015649-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:57:15.060772   12531 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 01:57:15.089259   12531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 01:57:15.098530   12531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 01:57:15.105862   12531 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0718 01:57:15.105907   12531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 01:57:15.113029   12531 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 01:57:15.113053   12531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0718 01:57:15.841110   12531 out.go:204]   - Generating certificates and keys ...
	I0718 01:57:18.847149   12531 out.go:204]   - Booting up control plane ...
	W0718 01:59:13.769122   12531 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220718015649-4043 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220718015649-4043 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0718 08:57:15.165560    1577 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0718 08:57:15.165616    1577 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:57:18.842177    1577 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:57:18.843077    1577 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
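	The troubleshooting hint in the kubeadm output above, `docker ps -a | grep kube | grep -v pause`, keeps Kubernetes component containers while dropping their `pause` sandbox containers. A sketch of how that filter behaves, run against made-up sample output rather than a live Docker daemon:

```shell
# Simulated `docker ps -a` output (sample data, not from this run).
sample='abc123  k8s_kube-apiserver_kube-apiserver-node_kube-system_1
def456  k8s_POD_kube-apiserver-node_kube-system_1_pause
789abc  nginx_web_default'
# Keep Kubernetes containers, drop the sandbox "pause" containers:
echo "$sample" | grep kube | grep -v pause
```

Only the `k8s_kube-apiserver` line survives: the `pause` sandbox is filtered out, and unrelated containers never match `kube` in the first place.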
	
	I0718 01:59:13.769162   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0718 01:59:14.189117   12531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 01:59:14.198251   12531 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0718 01:59:14.198299   12531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 01:59:14.205613   12531 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 01:59:14.205631   12531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0718 01:59:14.915021   12531 out.go:204]   - Generating certificates and keys ...
	I0718 01:59:15.759829   12531 out.go:204]   - Booting up control plane ...
	I0718 02:01:10.699635   12531 kubeadm.go:397] StartCluster complete in 3m55.636190865s
	I0718 02:01:10.699707   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 02:01:10.727844   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.727855   12531 logs.go:276] No container was found matching "kube-apiserver"
	I0718 02:01:10.727914   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 02:01:10.757355   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.757367   12531 logs.go:276] No container was found matching "etcd"
	I0718 02:01:10.757422   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 02:01:10.785908   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.785921   12531 logs.go:276] No container was found matching "coredns"
	I0718 02:01:10.785986   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 02:01:10.814261   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.814273   12531 logs.go:276] No container was found matching "kube-scheduler"
	I0718 02:01:10.814333   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 02:01:10.843321   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.843333   12531 logs.go:276] No container was found matching "kube-proxy"
	I0718 02:01:10.843397   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0718 02:01:10.871483   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.871496   12531 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0718 02:01:10.871551   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 02:01:10.899378   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.899389   12531 logs.go:276] No container was found matching "storage-provisioner"
	I0718 02:01:10.899447   12531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 02:01:10.928329   12531 logs.go:274] 0 containers: []
	W0718 02:01:10.928343   12531 logs.go:276] No container was found matching "kube-controller-manager"
	I0718 02:01:10.928350   12531 logs.go:123] Gathering logs for dmesg ...
	I0718 02:01:10.928357   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 02:01:10.941372   12531 logs.go:123] Gathering logs for describe nodes ...
	I0718 02:01:10.941383   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0718 02:01:10.993269   12531 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0718 02:01:10.993280   12531 logs.go:123] Gathering logs for Docker ...
	I0718 02:01:10.993288   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0718 02:01:11.008042   12531 logs.go:123] Gathering logs for container status ...
	I0718 02:01:11.008053   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 02:01:13.062825   12531 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054734454s)
	I0718 02:01:13.062974   12531 logs.go:123] Gathering logs for kubelet ...
	I0718 02:01:13.062982   12531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0718 02:01:13.102045   12531 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0718 08:59:14.255385    3864 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0718 08:59:14.255438    3864 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:59:15.771567    3864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:59:15.773111    3864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0718 02:01:13.102066   12531 out.go:239] * 
	W0718 02:01:13.102175   12531 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0718 08:59:14.255385    3864 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0718 08:59:14.255438    3864 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:59:15.771567    3864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:59:15.773111    3864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0718 02:01:13.102203   12531 out.go:239] * 
	W0718 02:01:13.102675   12531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 02:01:13.167625   12531 out.go:177] 
	W0718 02:01:13.210025   12531 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0718 08:59:14.255385    3864 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0718 08:59:14.255438    3864 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0718 08:59:15.771567    3864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0718 08:59:15.773111    3864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0718 02:01:13.210167   12531 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0718 02:01:13.210248   12531 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0718 02:01:13.253770   12531 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220718015649-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-07-18 02:01:13.35831 -0700 PDT m=+2122.193710774
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220718015649-4043
helpers_test.go:235: (dbg) docker inspect test-preload-20220718015649-4043:

-- stdout --
	[
	    {
	        "Id": "450f5c07457ffc580ad77d2964b4d29fe35c49104eb0d21e2ed683b42099da9b",
	        "Created": "2022-07-18T08:56:52.128129527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-18T08:56:52.419045266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/450f5c07457ffc580ad77d2964b4d29fe35c49104eb0d21e2ed683b42099da9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/450f5c07457ffc580ad77d2964b4d29fe35c49104eb0d21e2ed683b42099da9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/450f5c07457ffc580ad77d2964b4d29fe35c49104eb0d21e2ed683b42099da9b/hosts",
	        "LogPath": "/var/lib/docker/containers/450f5c07457ffc580ad77d2964b4d29fe35c49104eb0d21e2ed683b42099da9b/450f5c07457ffc580ad77d2964b4d29fe35c49104eb0d21e2ed683b42099da9b-json.log",
	        "Name": "/test-preload-20220718015649-4043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "test-preload-20220718015649-4043:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220718015649-4043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e1d89f5a44b64a2eddd19fe199210723ed8a79b867c0c53efe83d13a3492d70-init/diff:/var/lib/docker/overlay2/0155a28c1e691808bc7254363e6dbbca6bc736daa4a53efd06256136b9ccffc8/diff:/var/lib/docker/overlay2/785390ccfbe02ea2164ea7e4302ae44e311173f76acb63eabfcd4d68015d6e52/diff:/var/lib/docker/overlay2/df96474ebe21bf6fbcd3bf91d41d4194dd9fd81f0094fb1d72f6fda01994c351/diff:/var/lib/docker/overlay2/f4dc7db8eacf000538efa6fa8558bdcb747d4066e51ec1c358a773c2e09271a7/diff:/var/lib/docker/overlay2/aa4c8b0ec96277efded678498713e53c1b70a751b6fd7dc7ccee9a6e05b5b3f8/diff:/var/lib/docker/overlay2/cb4c669639025cf7733d34334313a090f346c95738fc907fb710fed890639f21/diff:/var/lib/docker/overlay2/07a024b847b7aac0978eb44222f9d3712dbc48d8cec8c6625855545a2c7ae448/diff:/var/lib/docker/overlay2/c0e7b154b472a3a21ee8d2f02c69d7b7923e50406f7f70062e6056026f200dc8/diff:/var/lib/docker/overlay2/67cf95a091bedd6dd0dbbd8c25178898a0e2b02be83c46fc6c2f8a1c2f02674c/diff:/var/lib/docker/overlay2/db72c65c7d673f0864a2c0a5dc96d808e37979b3bd687e68158a2d8b5f117825/diff:/var/lib/docker/overlay2/afa7c8c68e434d7b0de4251dc5611f8df1982d005845aac4e890a7763846c981/diff:/var/lib/docker/overlay2/fa6a8262350ae704a34d604b280f0219188f238c96c1c00402284867c62dac9b/diff:/var/lib/docker/overlay2/b4ca49622151ae6f59da73489bc287799c862b6ea5f501d50e1f5568054c19de/diff:/var/lib/docker/overlay2/f3031d98fba997baca831f6207d3037f01c7da3fa2ac76f99bc611d4168ee33d/diff:/var/lib/docker/overlay2/7e20f07fbf4fc050782fe533a5c0e929f5fc08e1bc1494470668b169763c14e1/diff:/var/lib/docker/overlay2/108580e5a81cfc53fb04e9d6b36ce60b75043b29247cc4d6cc19eb9b81647a00/diff:/var/lib/docker/overlay2/a9f59dd68b496ab360f729d54241666d174ab55664869e67a8872b60cef5ca12/diff:/var/lib/docker/overlay2/4df5325a696a9b14fda42b1aeecb02bc27cb1e67dfbfe21aebd6b8eed36b9e3f/diff:/var/lib/docker/overlay2/6dfcf99d0b9d662dcfca574f61cf73ee8594d9e744e5ae49a7f55923b03a2c3a/diff:/var/lib/docker/overlay2/788b405568bc01d169062393e0dc6283cf0059ff9c4d262121f5548e46e68538/diff:/var/lib/docker/overlay2/5c97b209193d33b13a50d1687185e6cd6af95fc2ebc75386ff80276b8197dd1e/diff:/var/lib/docker/overlay2/da440649ee72d4860bc5e559781b8d7873edbb45b2c6f37e82dc24f079f83e0c/diff:/var/lib/docker/overlay2/7016f4daa0c096e4141802b7222e3b4a2b05adb7d8cd21ea4578ebfa5cbae6a6/diff:/var/lib/docker/overlay2/ccd68a33cfb3faead1e5b4385b11360c5b56778be7cdbe4efa2227562e8ddcb1/diff:/var/lib/docker/overlay2/74545a493ce056cee52ab09c3b4f220df28d765423d6b46ea239beb8dc5db2ef/diff:/var/lib/docker/overlay2/753aa6aaf840b5186887dd205ebd62e8710d5ccabe5170548680c6e559445c2a/diff:/var/lib/docker/overlay2/791a458f173b8bb0dcdb9d18488941b8cf19c4cb83afb22d0a1bacc9675a7654/diff:/var/lib/docker/overlay2/5458881ce1af74e401eb3a10606457d34825e34d90ef078277b1d964e7edb783/diff:/var/lib/docker/overlay2/03176f6e10f98e9bb8d69fb37b851b04cebee2c2ba458ba838dd363a0315cbab/diff:/var/lib/docker/overlay2/d27ebe6d556402a77e23f3b194246c9a208d71be67f92bcc1ce6604a32fe721d/diff:/var/lib/docker/overlay2/a0ceb7b63b2bc5cda2cc5445514898b332ca9cdcad2f73fa2035bca40e4eaeae/diff:/var/lib/docker/overlay2/a7ec7247df2102087f04843233e5ba5cde0c4b60d27fb0569d4ec464928c509f/diff:/var/lib/docker/overlay2/8e48faefb8da020dbe9ffb682c540ffba4471404365c81b535f9dede181ff881/diff:/var/lib/docker/overlay2/e8dd2220075dfa1cdc1cf293daa1451d0a290c5d44378fbc7baecd5b67e12ef2/diff:/var/lib/docker/overlay2/cd999289e9e588853eb66d5862d1187afd3ace57b8f7b499ce99e6c9187d5543/diff:/var/lib/docker/overlay2/110bae2d3fa1ae298f0583dd243b411b79d8c104e55efd5dd4815c308b0b3208/diff:/var/lib/docker/overlay2/a667174986afec4ae0097f61dba34c02ba97ff6da900874c7b6c9276d2907fa4/diff:/var/lib/docker/overlay2/51327ffe92a17b372d59dbcd6875765f88c1ba0c4a6690d2f70c100d5201a353/diff:/var/lib/docker/overlay2/83b73e1c1aa1081d71e6f2c9710707bd523127816a44f4417a384d4b4e619fbb/diff:/var/lib/docker/overlay2/81486e42af37b860bd9c67a17c8f61366893b7477e9ec207373ec068cfd5e93f/diff:/var/lib/docker/overlay2/270d94a7876a2998769ca7a5234ebae1b59a1723fa38b22080253eed3ef983e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e1d89f5a44b64a2eddd19fe199210723ed8a79b867c0c53efe83d13a3492d70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e1d89f5a44b64a2eddd19fe199210723ed8a79b867c0c53efe83d13a3492d70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e1d89f5a44b64a2eddd19fe199210723ed8a79b867c0c53efe83d13a3492d70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220718015649-4043",
	                "Source": "/var/lib/docker/volumes/test-preload-20220718015649-4043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220718015649-4043",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220718015649-4043",
	                "name.minikube.sigs.k8s.io": "test-preload-20220718015649-4043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be86852cf03c8e273b8e94776ba8fbce424bb7818c8b36f875c2a450243646d5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55351"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55347"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55348"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55349"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55350"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be86852cf03c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220718015649-4043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "450f5c07457f",
	                        "test-preload-20220718015649-4043"
	                    ],
	                    "NetworkID": "64f9ab3c336d287e92bb5687fb8d536ea525bc457303b9e18b95133e500b1635",
	                    "EndpointID": "f659ce3d06a25ceaac1c0836f410308425e17c306f6d780f83357fd4074b6d7f",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220718015649-4043 -n test-preload-20220718015649-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220718015649-4043 -n test-preload-20220718015649-4043: exit status 6 (424.782585ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 02:01:13.843396   12946 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220718015649-4043" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220718015649-4043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220718015649-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220718015649-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220718015649-4043: (2.55828734s)
--- FAIL: TestPreload (266.46s)

TestRunningBinaryUpgrade (96.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1646196662.exe start -p running-upgrade-20220718020814-4043 --memory=2200 --vm-driver=docker 
E0718 02:08:16.740808    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1646196662.exe start -p running-upgrade-20220718020814-4043 --memory=2200 --vm-driver=docker : exit status 70 (35.251067811s)

-- stdout --
	* [running-upgrade-20220718020814-4043] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig392769385
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "running-upgrade-20220718020814-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220718020814-4043", then "minikube start -p running-upgrade-20220718020814-4043 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 22.20 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 72.02 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 122.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 176.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 226.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 271.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 319.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 359.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 409.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 456.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 504.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiBE0718 02:08:20.149596   14910 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14606
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3503512156/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3503512156/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3503512156/001/.minikube/bin/docker-machine-driver-hyperkit 

version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1646196662.exe start -p running-upgrade-20220718020814-4043 --memory=2200 --vm-driver=docker 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3503512156/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
E0718 02:08:52.695003    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
* Starting control plane node minikube in cluster minikube
* Download complete!

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1646196662.exe start -p running-upgrade-20220718020814-4043 --memory=2200 --vm-driver=docker : exit status 70 (29.007475562s)

-- stdout --
	* [running-upgrade-20220718020814-4043] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2424804428
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220718020814-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "running-upgrade-20220718020814-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220718020814-4043", then "minikube start -p running-upgrade-20220718020814-4043 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 53.03 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 124.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 188.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 250.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 315.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 389.16 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 448.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 512.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiBE0718 02:08:55.553640   15249 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1646196662.exe start -p running-upgrade-20220718020814-4043 --memory=2200 --vm-driver=docker 
E0718 02:09:28.572596    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1646196662.exe start -p running-upgrade-20220718020814-4043 --memory=2200 --vm-driver=docker : exit status 70 (29.385293803s)

-- stdout --
	* [running-upgrade-20220718020814-4043] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3708684549
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220718020814-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "running-upgrade-20220718020814-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220718020814-4043", then "minikube start -p running-upgrade-20220718020814-4043 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 36.86 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 94.58 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 150.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 201.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 248.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 305.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 367.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 424.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 478.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 533.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiBE0718 02:09:25.848607   15455 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220718020814-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-07-18 02:09:51.118744 -0700 PDT m=+2639.937578828
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220718020814-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect running-upgrade-20220718020814-4043: exit status 1 (64.386556ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220718020814-4043 -n running-upgrade-20220718020814-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220718020814-4043 -n running-upgrade-20220718020814-4043: exit status 7 (116.216457ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 02:09:51.298509   15601 status.go:247] status error: host: state: unknown state "running-upgrade-20220718020814-4043": docker container inspect running-upgrade-20220718020814-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-20220718020814-4043" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-20220718020814-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220718020814-4043
--- FAIL: TestRunningBinaryUpgrade (96.95s)
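Every failure in the section above bottoms out in the same daemon error. When triaging a run like this, a quick sanity check is to count how often that error recurs in the saved log; a count that dwarfs the number of failed tests points at the Docker Desktop daemon rather than at minikube, which matches the log's own suggestion to restart Docker Desktop. A minimal sketch (the `/tmp` path and two-line sample log are illustrative, with the sample lines quoted from the stderr captures above):

```shell
# Write a tiny sample log using two stderr lines quoted from this report,
# then count how often the recurring daemon error appears in it.
cat > /tmp/minikube-triage-sample.log <<'EOF'
Error response from daemon: Bad response from Docker engine
Error response from daemon: Bad response from Docker engine
EOF
grep -c 'Bad response from Docker engine' /tmp/minikube-triage-sample.log   # prints 2
```

Against a real run, the same `grep -c` would be pointed at the full captured test log instead of the sample file.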

TestKubernetesUpgrade (57.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220718020505-4043 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0718 02:05:13.678585    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 02:05:29.670567    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220718020505-4043 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 80 (41.779612168s)

-- stdout --
	* [kubernetes-upgrade-20220718020505-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220718020505-4043 in cluster kubernetes-upgrade-20220718020505-4043
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20220718020505-4043" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 02:05:05.881014   14011 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:05:05.881293   14011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:05:05.881300   14011 out.go:309] Setting ErrFile to fd 2...
	I0718 02:05:05.881306   14011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:05:05.881470   14011 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:05:05.882220   14011 out.go:303] Setting JSON to false
	I0718 02:05:05.901283   14011 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3878,"bootTime":1658131227,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:05:05.901503   14011 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:05:05.923385   14011 out.go:177] * [kubernetes-upgrade-20220718020505-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:05:05.965425   14011 notify.go:193] Checking for updates...
	I0718 02:05:05.987024   14011 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:05:06.008303   14011 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:05:06.029292   14011 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:05:06.050076   14011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:05:06.071253   14011 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:05:06.092615   14011 config.go:178] Loaded profile config "missing-upgrade-20220718020414-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0718 02:05:06.092659   14011 driver.go:360] Setting default libvirt URI to qemu:///system
	I0718 02:05:06.183633   14011 docker.go:137] docker version: linux-20.10.17
	I0718 02:05:06.183790   14011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 02:05:06.343480   14011 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:61 SystemTime:2022-07-18 09:05:06.252234699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 02:05:06.364679   14011 out.go:177] * Using the docker driver based on user configuration
	I0718 02:05:06.385706   14011 start.go:284] selected driver: docker
	I0718 02:05:06.385720   14011 start.go:808] validating driver "docker" against <nil>
	I0718 02:05:06.385739   14011 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 02:05:06.388338   14011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 02:05:06.548759   14011 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:61 SystemTime:2022-07-18 09:05:06.460195051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 02:05:06.548918   14011 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0718 02:05:06.549082   14011 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 02:05:06.570828   14011 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 02:05:06.592334   14011 cni.go:95] Creating CNI manager for ""
	I0718 02:05:06.592351   14011 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 02:05:06.592361   14011 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220718020505-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220718020505-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 02:05:06.613451   14011 out.go:177] * Starting control plane node kubernetes-upgrade-20220718020505-4043 in cluster kubernetes-upgrade-20220718020505-4043
	I0718 02:05:06.655300   14011 cache.go:120] Beginning downloading kic base image for docker with docker
	I0718 02:05:06.676459   14011 out.go:177] * Pulling base image ...
	I0718 02:05:06.718305   14011 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0718 02:05:06.718342   14011 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0718 02:05:06.789783   14011 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0718 02:05:06.789853   14011 cache.go:57] Caching tarball of preloaded images
	I0718 02:05:06.790247   14011 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0718 02:05:06.832455   14011 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0718 02:05:06.834141   14011 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0718 02:05:06.853274   14011 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 02:05:06.853291   14011 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0718 02:05:06.951058   14011 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0718 02:05:08.837574   14011 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 02:05:08.837712   14011 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 02:05:09.378235   14011 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0718 02:05:09.378334   14011 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/kubernetes-upgrade-20220718020505-4043/config.json ...
	I0718 02:05:09.378387   14011 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/kubernetes-upgrade-20220718020505-4043/config.json: {Name:mk05d22cd7a1534160d9b4cf0f732347b8bb1c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 02:05:09.378729   14011 cache.go:208] Successfully downloaded all kic artifacts
	I0718 02:05:09.378758   14011 start.go:352] acquiring machines lock for kubernetes-upgrade-20220718020505-4043: {Name:mk33d0dc7ce892d156d6b2c6533592876c1b7641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 02:05:09.378877   14011 start.go:356] acquired machines lock for "kubernetes-upgrade-20220718020505-4043" in 112.469µs
	I0718 02:05:09.378899   14011 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220718020505-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220718020505-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 02:05:09.379004   14011 start.go:131] createHost starting for "" (driver="docker")
	I0718 02:05:09.434121   14011 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 02:05:09.434492   14011 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220718020505-4043" (driver="docker")
	I0718 02:05:09.434535   14011 client.go:168] LocalClient.Create starting
	I0718 02:05:09.434671   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem
	I0718 02:05:09.434752   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:09.434778   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:09.434866   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem
	I0718 02:05:09.434917   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:09.434936   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:09.435705   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:09.502447   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:09.502550   14011 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220718020505-4043] to gather additional debugging logs...
	I0718 02:05:09.502572   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043
	W0718 02:05:09.568640   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:09.568666   14011 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220718020505-4043]: docker network inspect kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.568703   14011 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220718020505-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I0718 02:05:09.568807   14011 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:09.634924   14011 cli_runner.go:211] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:09.635041   14011 network_create.go:272] running [docker network inspect bridge] to gather additional debugging logs...
	I0718 02:05:09.635059   14011 cli_runner.go:164] Run: docker network inspect bridge
	W0718 02:05:09.704016   14011 cli_runner.go:211] docker network inspect bridge returned with exit code 1
	I0718 02:05:09.704041   14011 network_create.go:275] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.704055   14011 network_create.go:277] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W0718 02:05:09.704063   14011 network_create.go:84] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.704388   14011 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000bc24f0] misses:0}
	I0718 02:05:09.704413   14011 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 02:05:09.704428   14011 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
	I0718 02:05:09.704507   14011 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043
	W0718 02:05:09.772588   14011 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	E0718 02:05:09.772648   14011 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24: create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:09.772794   14011 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:09.772872   14011 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	W0718 02:05:09.844618   14011 cli_runner.go:211] docker ps -a --format {{.Names}} returned with exit code 1
	W0718 02:05:09.844655   14011 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.844859   14011 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true
	W0718 02:05:09.913005   14011 cli_runner.go:211] docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0718 02:05:09.913045   14011 client.go:171] LocalClient.Create took 478.49591ms
	I0718 02:05:11.913941   14011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 02:05:11.914067   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:11.988029   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:11.988126   14011 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:12.265639   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:12.335923   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:12.336004   14011 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:12.878606   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:12.946515   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:12.946624   14011 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:13.602176   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:13.673489   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:13.673589   14011 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:13.673627   14011 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:13.673685   14011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 02:05:13.673730   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:13.739385   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:13.739468   14011 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:13.973018   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:14.041496   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:14.041576   14011 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:14.488948   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:14.560570   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:14.560649   14011 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:14.879215   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:14.947270   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:14.947372   14011 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:15.501818   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:15.575379   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:15.575464   14011 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:15.575483   14011 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:15.575490   14011 start.go:134] duration metric: createHost completed in 6.196408641s
	I0718 02:05:15.575496   14011 start.go:81] releasing machines lock for "kubernetes-upgrade-20220718020505-4043", held for 6.196537796s
	W0718 02:05:15.575509   14011 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:15.575945   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:15.640448   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:15.640502   14011 delete.go:82] Unable to get host status for kubernetes-upgrade-20220718020505-4043, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:15.640682   14011 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:15.640693   14011 start.go:617] Will try again in 5 seconds ...
	I0718 02:05:20.640916   14011 start.go:352] acquiring machines lock for kubernetes-upgrade-20220718020505-4043: {Name:mk33d0dc7ce892d156d6b2c6533592876c1b7641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 02:05:20.641163   14011 start.go:356] acquired machines lock for "kubernetes-upgrade-20220718020505-4043" in 120.382µs
	I0718 02:05:20.641200   14011 start.go:94] Skipping create...Using existing machine configuration
	I0718 02:05:20.641215   14011 fix.go:55] fixHost starting: 
	I0718 02:05:20.641611   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:20.710225   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:20.710264   14011 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220718020505-4043: state= err=unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.710290   14011 fix.go:108] machineExists: false. err=machine does not exist
	I0718 02:05:20.732234   14011 out.go:177] * docker "kubernetes-upgrade-20220718020505-4043" container is missing, will recreate.
	I0718 02:05:20.775948   14011 delete.go:124] DEMOLISHING kubernetes-upgrade-20220718020505-4043 ...
	I0718 02:05:20.776130   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:20.842120   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	W0718 02:05:20.842163   14011 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.842191   14011 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.842557   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:20.914136   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:20.914177   14011 delete.go:82] Unable to get host status for kubernetes-upgrade-20220718020505-4043, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.914263   14011 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043
	W0718 02:05:20.982625   14011 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:20.982655   14011 kic.go:356] could not find the container kubernetes-upgrade-20220718020505-4043 to remove it. will try anyways
	I0718 02:05:20.982720   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:21.047978   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	W0718 02:05:21.048019   14011 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:21.048087   14011 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220718020505-4043 /bin/bash -c "sudo init 0"
	W0718 02:05:21.114260   14011 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220718020505-4043 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 02:05:21.114288   14011 oci.go:646] error shutdown kubernetes-upgrade-20220718020505-4043: docker exec --privileged -t kubernetes-upgrade-20220718020505-4043 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.115735   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:22.186072   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:22.186116   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.186127   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:22.186146   14011 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.586788   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:22.654097   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:22.657177   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.657193   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:22.657220   14011 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:23.419888   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:23.489173   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:23.491066   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:23.491079   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:23.491100   14011 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:24.971358   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:25.039434   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:25.039478   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:25.039494   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:25.039514   14011 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:26.247265   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:26.317225   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:26.319173   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:26.319188   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:26.319218   14011 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:28.550829   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:28.617634   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:28.619457   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:28.619468   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:28.619490   14011 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:31.727884   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:31.797407   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:31.797458   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:31.797474   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:31.797495   14011 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:37.318027   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:37.388483   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:37.390504   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:37.390521   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:37.390551   14011 oci.go:88] couldn't shut down kubernetes-upgrade-20220718020505-4043 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	 
	I0718 02:05:37.390623   14011 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220718020505-4043
	W0718 02:05:37.459211   14011 cli_runner.go:211] docker rm -f -v kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:37.459340   14011 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043
	W0718 02:05:37.524457   14011 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:37.524549   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:37.589410   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:37.589512   14011 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220718020505-4043] to gather additional debugging logs...
	I0718 02:05:37.589538   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043
	W0718 02:05:37.653892   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:37.653920   14011 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220718020505-4043]: docker network inspect kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:37.653939   14011 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220718020505-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W0718 02:05:37.653949   14011 network_create.go:302] Error inspecting docker network kubernetes-upgrade-20220718020505-4043: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:37.654229   14011 delete.go:139] delete failed (probably ok) <nil>
	I0718 02:05:37.654237   14011 fix.go:115] Sleeping 1 second for extra luck!
	I0718 02:05:38.655374   14011 start.go:131] createHost starting for "" (driver="docker")
	I0718 02:05:38.677542   14011 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 02:05:38.677718   14011 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220718020505-4043" (driver="docker")
	I0718 02:05:38.677763   14011 client.go:168] LocalClient.Create starting
	I0718 02:05:38.677893   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem
	I0718 02:05:38.677973   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:38.677997   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:38.678080   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem
	I0718 02:05:38.678129   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:38.678154   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:38.699007   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:38.765839   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:38.767879   14011 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220718020505-4043] to gather additional debugging logs...
	I0718 02:05:38.767897   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043
	W0718 02:05:38.831642   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:38.831667   14011 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220718020505-4043]: docker network inspect kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:38.831692   14011 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220718020505-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I0718 02:05:38.831773   14011 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:38.896729   14011 cli_runner.go:211] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:38.896834   14011 network_create.go:272] running [docker network inspect bridge] to gather additional debugging logs...
	I0718 02:05:38.896858   14011 cli_runner.go:164] Run: docker network inspect bridge
	W0718 02:05:38.961365   14011 cli_runner.go:211] docker network inspect bridge returned with exit code 1
	I0718 02:05:38.961390   14011 network_create.go:275] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:38.961402   14011 network_create.go:277] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W0718 02:05:38.961409   14011 network_create.go:84] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:38.961682   14011 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bc24f0] amended:false}} dirty:map[] misses:0}
	I0718 02:05:38.961717   14011 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 02:05:38.961924   14011 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bc24f0] amended:true}} dirty:map[192.168.49.0:0xc000bc24f0 192.168.58.0:0xc00071a3d0] misses:0}
	I0718 02:05:38.961936   14011 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 02:05:38.961943   14011 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0 ...
	I0718 02:05:38.962009   14011 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043
	W0718 02:05:39.028633   14011 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	E0718 02:05:39.028678   14011 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24: create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:39.028828   14011 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:39.028910   14011 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	W0718 02:05:39.095712   14011 cli_runner.go:211] docker ps -a --format {{.Names}} returned with exit code 1
	W0718 02:05:39.095746   14011 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:39.098126   14011 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true
	W0718 02:05:39.160939   14011 cli_runner.go:211] docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0718 02:05:39.160971   14011 client.go:171] LocalClient.Create took 483.197179ms
	I0718 02:05:41.166399   14011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 02:05:41.166553   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:41.232994   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:41.235129   14011 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:41.435817   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:41.507303   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:41.507401   14011 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:41.951927   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:42.023724   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:42.027141   14011 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:42.432620   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:42.503933   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:42.504019   14011 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:43.099737   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:43.172560   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:43.172673   14011 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:43.172694   14011 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:43.172743   14011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 02:05:43.172786   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:43.236956   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:43.237065   14011 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:43.507159   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:43.582917   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:43.583012   14011 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:44.096338   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:44.165088   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:44.165172   14011 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:44.611875   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:44.680091   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:44.680193   14011 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:44.680213   14011 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:44.680221   14011 start.go:134] duration metric: createHost completed in 6.024703477s
	I0718 02:05:44.680290   14011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 02:05:44.680335   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:44.749458   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:44.749561   14011 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:45.065113   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:45.133889   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:45.133985   14011 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:45.399625   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:45.469630   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:45.469717   14011 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.240189   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.309621   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:46.309723   14011 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:46.309747   14011 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.309799   14011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 02:05:46.309845   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.376929   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:46.379466   14011 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.635673   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.702129   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:46.702218   14011 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.900677   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.974060   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:46.974140   14011 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:47.346863   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:47.417816   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:47.417922   14011 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:47.417938   14011 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:47.417945   14011 fix.go:57] fixHost completed within 26.776412308s
	I0718 02:05:47.417958   14011 start.go:81] releasing machines lock for "kubernetes-upgrade-20220718020505-4043", held for 26.776461983s
	W0718 02:05:47.418121   14011 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220718020505-4043" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220718020505-4043" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:47.460569   14011 out.go:177] 
	W0718 02:05:47.481968   14011 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:47.481995   14011 out.go:239] * 
	* 
	W0718 02:05:47.483095   14011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 02:05:47.545773   14011 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220718020505-4043 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220718020505-4043
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220718020505-4043: exit status 82 (14.690016805s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20220718020505-4043"  ...
	* Stopping node "kubernetes-upgrade-20220718020505-4043"  ...
	* Stopping node "kubernetes-upgrade-20220718020505-4043"  ...
	* Stopping node "kubernetes-upgrade-20220718020505-4043"  ...
	* Stopping node "kubernetes-upgrade-20220718020505-4043"  ...
	* Stopping node "kubernetes-upgrade-20220718020505-4043"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20220718020505-4043 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220718020505-4043 failed: exit status 82
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-07-18 02:06:02.304686 -0700 PDT m=+2411.136636736
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220718020505-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20220718020505-4043: exit status 1 (64.792056ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220718020505-4043 -n kubernetes-upgrade-20220718020505-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220718020505-4043 -n kubernetes-upgrade-20220718020505-4043: exit status 7 (115.629495ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 02:06:02.484327   14257 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220718020505-4043" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220718020505-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220718020505-4043
--- FAIL: TestKubernetesUpgrade (57.27s)

TestMissingContainerUpgrade (239.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1347721722.exe start -p missing-upgrade-20220718020414-4043 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1347721722.exe start -p missing-upgrade-20220718020414-4043 --memory=2200 --driver=docker : exit status 78 (2m45.186921211s)

-- stdout --
	! [missing-upgrade-20220718020414-4043] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220718020414-4043
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220718020414-4043" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-18 09:04:52.862353168 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [CREATE_TIMEOUT] Failed to start docker container. "minikube start -p missing-upgrade-20220718020414-4043" may fix it. creating host: create host timed out in 120.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1347721722.exe start -p missing-upgrade-20220718020414-4043 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1347721722.exe start -p missing-upgrade-20220718020414-4043 --memory=2200 --driver=docker : exit status 70 (32.706832267s)

-- stdout --
	* [missing-upgrade-20220718020414-4043] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220718020414-4043
	* Pulling base image ...
	* docker "missing-upgrade-20220718020414-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* docker "missing-upgrade-20220718020414-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E0718 02:07:03.913600   14404 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220718020414-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20220718020414-4043" may fix it.: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220718020414-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1347721722.exe start -p missing-upgrade-20220718020414-4043 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1347721722.exe start -p missing-upgrade-20220718020414-4043 --memory=2200 --driver=docker : exit status 70 (38.277886873s)

-- stdout --
	* [missing-upgrade-20220718020414-4043] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220718020414-4043
	* Pulling base image ...
	* docker "missing-upgrade-20220718020414-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* docker "missing-upgrade-20220718020414-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E0718 02:07:38.192556   14636 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220718020414-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20220718020414-4043" may fix it.: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220718020414-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-07-18 02:08:14.132763 -0700 PDT m=+2542.963140133
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220718020414-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect missing-upgrade-20220718020414-4043: exit status 1 (68.296862ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220718020414-4043 -n missing-upgrade-20220718020414-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220718020414-4043 -n missing-upgrade-20220718020414-4043: exit status 7 (133.489989ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 02:08:14.333561   14886 status.go:247] status error: host: state: unknown state "missing-upgrade-20220718020414-4043": docker container inspect missing-upgrade-20220718020414-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20220718020414-4043" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20220718020414-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220718020414-4043
--- FAIL: TestMissingContainerUpgrade (239.99s)

TestStoppedBinaryUpgrade/Upgrade (155.27s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2054135174.exe start -p stopped-upgrade-20220718020603-4043 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2054135174.exe start -p stopped-upgrade-20220718020603-4043 --memory=2200 --vm-driver=docker : exit status 70 (1m20.243660016s)

-- stdout --
	* [stopped-upgrade-20220718020603-4043] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig4081701830
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20220718020603-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220718020603-4043", then "minikube start -p stopped-upgrade-20220718020603-4043 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	E0718 02:06:12.466788   14277 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2054135174.exe start -p stopped-upgrade-20220718020603-4043 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2054135174.exe start -p stopped-upgrade-20220718020603-4043 --memory=2200 --vm-driver=docker : exit status 70 (38.272342735s)

-- stdout --
	* [stopped-upgrade-20220718020603-4043] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1104362938
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20220718020603-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20220718020603-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220718020603-4043", then "minikube start -p stopped-upgrade-20220718020603-4043 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E0718 02:07:28.310875   14549 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2054135174.exe start -p stopped-upgrade-20220718020603-4043 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2054135174.exe start -p stopped-upgrade-20220718020603-4043 --memory=2200 --vm-driver=docker : exit status 70 (34.253003778s)

-- stdout --
	* [stopped-upgrade-20220718020603-4043] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3874980473
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20220718020603-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20220718020603-4043" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220718020603-4043", then "minikube start -p stopped-upgrade-20220718020603-4043 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E0718 02:08:07.348109   14813 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220718020603-4043 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (155.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.51s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220718020603-4043
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220718020603-4043: exit status 80 (481.277424ms)

-- stdout --
	* 
	* ==> Audit <==
	* |------------|------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------|----------|---------|---------------------|---------------------|
	|  Command   |                                                                   Args                                                                   |                 Profile                  |   User   | Version |     Start Time      |      End Time       |
	|------------|------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------|----------|---------|---------------------|---------------------|
	| cp         | multinode-20220718014905-4043 cp multinode-20220718014905-4043-m02:/home/docker/cp-test.txt                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1821558859/001/cp-test_multinode-20220718014905-4043-m02.txt |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m02                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| cp         | multinode-20220718014905-4043 cp multinode-20220718014905-4043-m02:/home/docker/cp-test.txt                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | multinode-20220718014905-4043:/home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043.txt                   |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m02                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 sudo cat                                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | /home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043.txt                                                 |                                          |          |         |                     |                     |
	| cp         | multinode-20220718014905-4043 cp multinode-20220718014905-4043-m02:/home/docker/cp-test.txt                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | multinode-20220718014905-4043-m03:/home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043-m03.txt           |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m02                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 sudo cat                                                          | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | /home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043-m03.txt                                             |                                          |          |         |                     |                     |
	| cp         | multinode-20220718014905-4043 cp testdata/cp-test.txt                                                                                    | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | multinode-20220718014905-4043-m03:/home/docker/cp-test.txt                                                                               |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m03                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| cp         | multinode-20220718014905-4043 cp multinode-20220718014905-4043-m03:/home/docker/cp-test.txt                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1821558859/001/cp-test_multinode-20220718014905-4043-m03.txt |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m03                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| cp         | multinode-20220718014905-4043 cp multinode-20220718014905-4043-m03:/home/docker/cp-test.txt                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | multinode-20220718014905-4043:/home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043.txt                   |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m03                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 sudo cat                                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | /home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043.txt                                                 |                                          |          |         |                     |                     |
	| cp         | multinode-20220718014905-4043 cp multinode-20220718014905-4043-m03:/home/docker/cp-test.txt                                              | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | multinode-20220718014905-4043-m02:/home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043-m02.txt           |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | ssh -n                                                                                                                                   |                                          |          |         |                     |                     |
	|            | multinode-20220718014905-4043-m03                                                                                                        |                                          |          |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                        |                                          |          |         |                     |                     |
	| ssh        | multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 sudo cat                                                          | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:51 PDT |
	|            | /home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043-m02.txt                                             |                                          |          |         |                     |                     |
	| node       | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:51 PDT | 18 Jul 22 01:52 PDT |
	|            | node stop m03                                                                                                                            |                                          |          |         |                     |                     |
	| node       | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:52 PDT | 18 Jul 22 01:52 PDT |
	|            | node start m03                                                                                                                           |                                          |          |         |                     |                     |
	|            | --alsologtostderr                                                                                                                        |                                          |          |         |                     |                     |
	| node       | list -p                                                                                                                                  | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:52 PDT |                     |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:52 PDT | 18 Jul 22 01:53 PDT |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:53 PDT | 18 Jul 22 01:54 PDT |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	|            | --wait=true -v=8                                                                                                                         |                                          |          |         |                     |                     |
	|            | --alsologtostderr                                                                                                                        |                                          |          |         |                     |                     |
	| node       | list -p                                                                                                                                  | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:54 PDT |                     |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	| node       | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:54 PDT | 18 Jul 22 01:54 PDT |
	|            | node delete m03                                                                                                                          |                                          |          |         |                     |                     |
	| stop       | multinode-20220718014905-4043                                                                                                            | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:54 PDT | 18 Jul 22 01:55 PDT |
	|            | stop                                                                                                                                     |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:55 PDT | 18 Jul 22 01:56 PDT |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	|            | --wait=true -v=8                                                                                                                         |                                          |          |         |                     |                     |
	|            | --alsologtostderr                                                                                                                        |                                          |          |         |                     |                     |
	|            | --driver=docker                                                                                                                          |                                          |          |         |                     |                     |
	| node       | list -p                                                                                                                                  | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT |                     |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | multinode-20220718014905-4043-m02        | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT |                     |
	|            | multinode-20220718014905-4043-m02                                                                                                        |                                          |          |         |                     |                     |
	|            | --driver=docker                                                                                                                          |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | multinode-20220718014905-4043-m03        | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT | 18 Jul 22 01:56 PDT |
	|            | multinode-20220718014905-4043-m03                                                                                                        |                                          |          |         |                     |                     |
	|            | --driver=docker                                                                                                                          |                                          |          |         |                     |                     |
	| node       | add -p                                                                                                                                   | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT |                     |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | multinode-20220718014905-4043-m03        | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT | 18 Jul 22 01:56 PDT |
	|            | multinode-20220718014905-4043-m03                                                                                                        |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | multinode-20220718014905-4043            | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT | 18 Jul 22 01:56 PDT |
	|            | multinode-20220718014905-4043                                                                                                            |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | test-preload-20220718015649-4043         | jenkins  | v1.26.0 | 18 Jul 22 01:56 PDT |                     |
	|            | test-preload-20220718015649-4043                                                                                                         |                                          |          |         |                     |                     |
	|            | --memory=2200 --alsologtostderr                                                                                                          |                                          |          |         |                     |                     |
	|            | --wait=true --preload=false                                                                                                              |                                          |          |         |                     |                     |
	|            | --driver=docker                                                                                                                          |                                          |          |         |                     |                     |
	|            | --kubernetes-version=v1.17.0                                                                                                             |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | test-preload-20220718015649-4043         | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT | 18 Jul 22 02:01 PDT |
	|            | test-preload-20220718015649-4043                                                                                                         |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT | 18 Jul 22 02:01 PDT |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --memory=2048 --driver=docker                                                                                                            |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 5m                                                                                                                            |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 5m                                                                                                                            |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 5m                                                                                                                            |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 15s                                                                                                                           |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 15s                                                                                                                           |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 15s                                                                                                                           |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:01 PDT | 18 Jul 22 02:01 PDT |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --cancel-scheduled                                                                                                                       |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:02 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 15s                                                                                                                           |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:02 PDT |                     |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 15s                                                                                                                           |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:02 PDT | 18 Jul 22 02:02 PDT |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --schedule 15s                                                                                                                           |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | scheduled-stop-20220718020116-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:02 PDT | 18 Jul 22 02:02 PDT |
	|            | scheduled-stop-20220718020116-4043                                                                                                       |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | skaffold-20220718020259-4043             | jenkins  | v1.26.0 | 18 Jul 22 02:03 PDT | 18 Jul 22 02:03 PDT |
	|            | skaffold-20220718020259-4043                                                                                                             |                                          |          |         |                     |                     |
	|            | --memory=2600 --driver=docker                                                                                                            |                                          |          |         |                     |                     |
	| docker-env | --shell none -p                                                                                                                          | skaffold-20220718020259-4043             | skaffold | v1.26.0 | 18 Jul 22 02:03 PDT | 18 Jul 22 02:03 PDT |
	|            | skaffold-20220718020259-4043                                                                                                             |                                          |          |         |                     |                     |
	|            | --user=skaffold                                                                                                                          |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | skaffold-20220718020259-4043             | jenkins  | v1.26.0 | 18 Jul 22 02:03 PDT | 18 Jul 22 02:04 PDT |
	|            | skaffold-20220718020259-4043                                                                                                             |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | insufficient-storage-20220718020400-4043 | jenkins  | v1.26.0 | 18 Jul 22 02:04 PDT |                     |
	|            | insufficient-storage-20220718020400-4043                                                                                                 |                                          |          |         |                     |                     |
	|            | --memory=2048 --output=json --wait=true                                                                                                  |                                          |          |         |                     |                     |
	|            | --driver=docker                                                                                                                          |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | insufficient-storage-20220718020400-4043 | jenkins  | v1.26.0 | 18 Jul 22 02:04 PDT | 18 Jul 22 02:04 PDT |
	|            | insufficient-storage-20220718020400-4043                                                                                                 |                                          |          |         |                     |                     |
	| delete     | -p flannel-20220718020413-4043                                                                                                           | flannel-20220718020413-4043              | jenkins  | v1.26.0 | 18 Jul 22 02:04 PDT | 18 Jul 22 02:04 PDT |
	| start      | -p                                                                                                                                       | offline-docker-20220718020413-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:04 PDT | 18 Jul 22 02:05 PDT |
	|            | offline-docker-20220718020413-4043                                                                                                       |                                          |          |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                                                                   |                                          |          |         |                     |                     |
	|            | --memory=2048 --wait=true                                                                                                                |                                          |          |         |                     |                     |
	|            | --driver=docker                                                                                                                          |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | custom-flannel-20220718020414-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:04 PDT | 18 Jul 22 02:04 PDT |
	|            | custom-flannel-20220718020414-4043                                                                                                       |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | offline-docker-20220718020413-4043       | jenkins  | v1.26.0 | 18 Jul 22 02:05 PDT | 18 Jul 22 02:05 PDT |
	|            | offline-docker-20220718020413-4043                                                                                                       |                                          |          |         |                     |                     |
	| start      | -p                                                                                                                                       | kubernetes-upgrade-20220718020505-4043   | jenkins  | v1.26.0 | 18 Jul 22 02:05 PDT |                     |
	|            | kubernetes-upgrade-20220718020505-4043                                                                                                   |                                          |          |         |                     |                     |
	|            | --memory=2200                                                                                                                            |                                          |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0                                                                                                             |                                          |          |         |                     |                     |
	|            | --alsologtostderr -v=1 --driver=docker                                                                                                   |                                          |          |         |                     |                     |
	|            |                                                                                                                                          |                                          |          |         |                     |                     |
	| stop       | -p                                                                                                                                       | kubernetes-upgrade-20220718020505-4043   | jenkins  | v1.26.0 | 18 Jul 22 02:05 PDT |                     |
	|            | kubernetes-upgrade-20220718020505-4043                                                                                                   |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | kubernetes-upgrade-20220718020505-4043   | jenkins  | v1.26.0 | 18 Jul 22 02:06 PDT | 18 Jul 22 02:06 PDT |
	|            | kubernetes-upgrade-20220718020505-4043                                                                                                   |                                          |          |         |                     |                     |
	| delete     | -p                                                                                                                                       | missing-upgrade-20220718020414-4043      | jenkins  | v1.26.0 | 18 Jul 22 02:08 PDT | 18 Jul 22 02:08 PDT |
	|            | missing-upgrade-20220718020414-4043                                                                                                      |                                          |          |         |                     |                     |
	|------------|------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/18 02:05:05
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 02:05:05.881014   14011 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:05:05.881293   14011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:05:05.881300   14011 out.go:309] Setting ErrFile to fd 2...
	I0718 02:05:05.881306   14011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:05:05.881470   14011 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:05:05.882220   14011 out.go:303] Setting JSON to false
	I0718 02:05:05.901283   14011 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3878,"bootTime":1658131227,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:05:05.901503   14011 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:05:05.923385   14011 out.go:177] * [kubernetes-upgrade-20220718020505-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:05:05.965425   14011 notify.go:193] Checking for updates...
	I0718 02:05:05.987024   14011 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:05:06.008303   14011 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:05:06.029292   14011 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:05:06.050076   14011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:05:06.071253   14011 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:05:06.092615   14011 config.go:178] Loaded profile config "missing-upgrade-20220718020414-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0718 02:05:06.092659   14011 driver.go:360] Setting default libvirt URI to qemu:///system
	I0718 02:05:06.183633   14011 docker.go:137] docker version: linux-20.10.17
	I0718 02:05:06.183790   14011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 02:05:06.343480   14011 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:61 SystemTime:2022-07-18 09:05:06.252234699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 02:05:06.364679   14011 out.go:177] * Using the docker driver based on user configuration
	I0718 02:05:06.385706   14011 start.go:284] selected driver: docker
	I0718 02:05:06.385720   14011 start.go:808] validating driver "docker" against <nil>
	I0718 02:05:06.385739   14011 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 02:05:06.388338   14011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 02:05:06.548759   14011 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:61 SystemTime:2022-07-18 09:05:06.460195051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 02:05:06.548918   14011 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0718 02:05:06.549082   14011 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 02:05:06.570828   14011 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 02:05:06.592334   14011 cni.go:95] Creating CNI manager for ""
	I0718 02:05:06.592351   14011 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 02:05:06.592361   14011 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220718020505-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220718020505-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 02:05:06.613451   14011 out.go:177] * Starting control plane node kubernetes-upgrade-20220718020505-4043 in cluster kubernetes-upgrade-20220718020505-4043
	I0718 02:05:06.655300   14011 cache.go:120] Beginning downloading kic base image for docker with docker
	I0718 02:05:06.676459   14011 out.go:177] * Pulling base image ...
	I0718 02:05:06.718305   14011 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0718 02:05:06.718342   14011 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0718 02:05:06.789783   14011 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0718 02:05:06.789853   14011 cache.go:57] Caching tarball of preloaded images
	I0718 02:05:06.790247   14011 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0718 02:05:06.832455   14011 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0718 02:05:06.834141   14011 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0718 02:05:06.853274   14011 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 02:05:06.853291   14011 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0718 02:05:06.951058   14011 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0718 02:05:08.837574   14011 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 02:05:08.837712   14011 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 02:05:09.378235   14011 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
	I0718 02:05:09.378334   14011 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/kubernetes-upgrade-20220718020505-4043/config.json ...
	I0718 02:05:09.378387   14011 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/kubernetes-upgrade-20220718020505-4043/config.json: {Name:mk05d22cd7a1534160d9b4cf0f732347b8bb1c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 02:05:09.378729   14011 cache.go:208] Successfully downloaded all kic artifacts
	I0718 02:05:09.378758   14011 start.go:352] acquiring machines lock for kubernetes-upgrade-20220718020505-4043: {Name:mk33d0dc7ce892d156d6b2c6533592876c1b7641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 02:05:09.378877   14011 start.go:356] acquired machines lock for "kubernetes-upgrade-20220718020505-4043" in 112.469µs
	I0718 02:05:09.378899   14011 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220718020505-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220718020505
-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 02:05:09.379004   14011 start.go:131] createHost starting for "" (driver="docker")
	I0718 02:05:09.434121   14011 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 02:05:09.434492   14011 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220718020505-4043" (driver="docker")
	I0718 02:05:09.434535   14011 client.go:168] LocalClient.Create starting
	I0718 02:05:09.434671   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem
	I0718 02:05:09.434752   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:09.434778   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:09.434866   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem
	I0718 02:05:09.434917   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:09.434936   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:09.435705   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:09.502447   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:09.502550   14011 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220718020505-4043] to gather additional debugging logs...
	I0718 02:05:09.502572   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043
	W0718 02:05:09.568640   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:09.568666   14011 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220718020505-4043]: docker network inspect kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.568703   14011 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220718020505-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I0718 02:05:09.568807   14011 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:09.634924   14011 cli_runner.go:211] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:09.635041   14011 network_create.go:272] running [docker network inspect bridge] to gather additional debugging logs...
	I0718 02:05:09.635059   14011 cli_runner.go:164] Run: docker network inspect bridge
	W0718 02:05:09.704016   14011 cli_runner.go:211] docker network inspect bridge returned with exit code 1
	I0718 02:05:09.704041   14011 network_create.go:275] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.704055   14011 network_create.go:277] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W0718 02:05:09.704063   14011 network_create.go:84] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.704388   14011 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000bc24f0] misses:0}
	I0718 02:05:09.704413   14011 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 02:05:09.704428   14011 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
	I0718 02:05:09.704507   14011 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043
	W0718 02:05:09.772588   14011 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	E0718 02:05:09.772648   14011 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24: create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:09.772794   14011 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220718020505-4043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:09.772872   14011 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	W0718 02:05:09.844618   14011 cli_runner.go:211] docker ps -a --format {{.Names}} returned with exit code 1
	W0718 02:05:09.844655   14011 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:09.844859   14011 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true
	W0718 02:05:09.913005   14011 cli_runner.go:211] docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0718 02:05:09.913045   14011 client.go:171] LocalClient.Create took 478.49591ms
	I0718 02:05:11.913941   14011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 02:05:11.914067   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:11.988029   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:11.988126   14011 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:12.265639   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:12.335923   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:12.336004   14011 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:12.878606   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:12.946515   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:12.946624   14011 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:13.602176   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:13.673489   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:13.673589   14011 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:13.673627   14011 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:13.673685   14011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 02:05:13.673730   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:13.739385   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:13.739468   14011 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:13.973018   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:14.041496   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:14.041576   14011 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:14.488948   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:14.560570   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:14.560649   14011 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:14.879215   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:14.947270   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:14.947372   14011 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:15.501818   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:15.575379   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:15.575464   14011 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:15.575483   14011 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:15.575490   14011 start.go:134] duration metric: createHost completed in 6.196408641s
	I0718 02:05:15.575496   14011 start.go:81] releasing machines lock for "kubernetes-upgrade-20220718020505-4043", held for 6.196537796s
	W0718 02:05:15.575509   14011 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:15.575945   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:15.640448   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:15.640502   14011 delete.go:82] Unable to get host status for kubernetes-upgrade-20220718020505-4043, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:15.640682   14011 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:15.640693   14011 start.go:617] Will try again in 5 seconds ...
	I0718 02:05:20.640916   14011 start.go:352] acquiring machines lock for kubernetes-upgrade-20220718020505-4043: {Name:mk33d0dc7ce892d156d6b2c6533592876c1b7641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 02:05:20.641163   14011 start.go:356] acquired machines lock for "kubernetes-upgrade-20220718020505-4043" in 120.382µs
	I0718 02:05:20.641200   14011 start.go:94] Skipping create...Using existing machine configuration
	I0718 02:05:20.641215   14011 fix.go:55] fixHost starting: 
	I0718 02:05:20.641611   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:20.710225   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:20.710264   14011 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220718020505-4043: state= err=unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.710290   14011 fix.go:108] machineExists: false. err=machine does not exist
	I0718 02:05:20.732234   14011 out.go:177] * docker "kubernetes-upgrade-20220718020505-4043" container is missing, will recreate.
	I0718 02:05:20.775948   14011 delete.go:124] DEMOLISHING kubernetes-upgrade-20220718020505-4043 ...
	I0718 02:05:20.776130   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:20.842120   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	W0718 02:05:20.842163   14011 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.842191   14011 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.842557   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:20.914136   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:20.914177   14011 delete.go:82] Unable to get host status for kubernetes-upgrade-20220718020505-4043, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:20.914263   14011 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043
	W0718 02:05:20.982625   14011 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:20.982655   14011 kic.go:356] could not find the container kubernetes-upgrade-20220718020505-4043 to remove it. will try anyways
	I0718 02:05:20.982720   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:21.047978   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	W0718 02:05:21.048019   14011 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:21.048087   14011 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220718020505-4043 /bin/bash -c "sudo init 0"
	W0718 02:05:21.114260   14011 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220718020505-4043 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 02:05:21.114288   14011 oci.go:646] error shutdown kubernetes-upgrade-20220718020505-4043: docker exec --privileged -t kubernetes-upgrade-20220718020505-4043 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.115735   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:22.186072   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:22.186116   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.186127   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:22.186146   14011 retry.go:31] will retry after 400.45593ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.586788   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:22.654097   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:22.657177   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:22.657193   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:22.657220   14011 retry.go:31] will retry after 761.409471ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:23.419888   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:23.489173   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:23.491066   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:23.491079   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:23.491100   14011 retry.go:31] will retry after 1.477844956s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:24.971358   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:25.039434   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:25.039478   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:25.039494   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:25.039514   14011 retry.go:31] will retry after 1.205320285s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:26.247265   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:26.317225   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:26.319173   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:26.319188   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:26.319218   14011 retry.go:31] will retry after 2.22916351s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:28.550829   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:28.617634   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:28.619457   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:28.619468   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:28.619490   14011 retry.go:31] will retry after 3.10606463s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:31.727884   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:31.797407   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:31.797458   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:31.797474   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:31.797495   14011 retry.go:31] will retry after 5.518130445s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:37.318027   14011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}
	W0718 02:05:37.388483   14011 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}} returned with exit code 1
	I0718 02:05:37.390504   14011 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:37.390521   14011 oci.go:660] temporary error: container kubernetes-upgrade-20220718020505-4043 status is  but expect it to be exited
	I0718 02:05:37.390551   14011 oci.go:88] couldn't shut down kubernetes-upgrade-20220718020505-4043 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220718020505-4043": docker container inspect kubernetes-upgrade-20220718020505-4043 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	 
	I0718 02:05:37.390623   14011 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220718020505-4043
	W0718 02:05:37.459211   14011 cli_runner.go:211] docker rm -f -v kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:37.459340   14011 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043
	W0718 02:05:37.524457   14011 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:37.524549   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:37.589410   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:37.589512   14011 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220718020505-4043] to gather additional debugging logs...
	I0718 02:05:37.589538   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043
	W0718 02:05:37.653892   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:37.653920   14011 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220718020505-4043]: docker network inspect kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:37.653939   14011 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220718020505-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W0718 02:05:37.653949   14011 network_create.go:302] Error inspecting docker network kubernetes-upgrade-20220718020505-4043: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:37.654229   14011 delete.go:139] delete failed (probably ok) <nil>
	I0718 02:05:37.654237   14011 fix.go:115] Sleeping 1 second for extra luck!
	I0718 02:05:38.655374   14011 start.go:131] createHost starting for "" (driver="docker")
	I0718 02:05:38.677542   14011 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 02:05:38.677718   14011 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220718020505-4043" (driver="docker")
	I0718 02:05:38.677763   14011 client.go:168] LocalClient.Create starting
	I0718 02:05:38.677893   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/ca.pem
	I0718 02:05:38.677973   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:38.677997   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:38.678080   14011 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/certs/cert.pem
	I0718 02:05:38.678129   14011 main.go:134] libmachine: Decoding PEM data...
	I0718 02:05:38.678154   14011 main.go:134] libmachine: Parsing certificate...
	I0718 02:05:38.699007   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:38.765839   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:38.767879   14011 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220718020505-4043] to gather additional debugging logs...
	I0718 02:05:38.767897   14011 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220718020505-4043
	W0718 02:05:38.831642   14011 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:38.831667   14011 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220718020505-4043]: docker network inspect kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:38.831692   14011 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220718020505-4043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I0718 02:05:38.831773   14011 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 02:05:38.896729   14011 cli_runner.go:211] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 02:05:38.896834   14011 network_create.go:272] running [docker network inspect bridge] to gather additional debugging logs...
	I0718 02:05:38.896858   14011 cli_runner.go:164] Run: docker network inspect bridge
	W0718 02:05:38.961365   14011 cli_runner.go:211] docker network inspect bridge returned with exit code 1
	I0718 02:05:38.961390   14011 network_create.go:275] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:38.961402   14011 network_create.go:277] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W0718 02:05:38.961409   14011 network_create.go:84] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:38.961682   14011 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bc24f0] amended:false}} dirty:map[] misses:0}
	I0718 02:05:38.961717   14011 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 02:05:38.961924   14011 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bc24f0] amended:true}} dirty:map[192.168.49.0:0xc000bc24f0 192.168.58.0:0xc00071a3d0] misses:0}
	I0718 02:05:38.961936   14011 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0718 02:05:38.961943   14011 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0 ...
	I0718 02:05:38.962009   14011 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043
	W0718 02:05:39.028633   14011 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	E0718 02:05:39.028678   14011 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24: create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W0718 02:05:39.028828   14011 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220718020505-4043 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:39.028910   14011 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	W0718 02:05:39.095712   14011 cli_runner.go:211] docker ps -a --format {{.Names}} returned with exit code 1
	W0718 02:05:39.095746   14011 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:39.098126   14011 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true
	W0718 02:05:39.160939   14011 cli_runner.go:211] docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0718 02:05:39.160971   14011 client.go:171] LocalClient.Create took 483.197179ms
	I0718 02:05:41.166399   14011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 02:05:41.166553   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:41.232994   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:41.235129   14011 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:41.435817   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:41.507303   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:41.507401   14011 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:41.951927   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:42.023724   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:42.027141   14011 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:42.432620   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:42.503933   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:42.504019   14011 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:43.099737   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:43.172560   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:43.172673   14011 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:43.172694   14011 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:43.172743   14011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 02:05:43.172786   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:43.236956   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:43.237065   14011 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:43.507159   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:43.582917   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:43.583012   14011 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:44.096338   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:44.165088   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:44.165172   14011 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:44.611875   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:44.680091   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:44.680193   14011 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:44.680213   14011 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:44.680221   14011 start.go:134] duration metric: createHost completed in 6.024703477s
	I0718 02:05:44.680290   14011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 02:05:44.680335   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:44.749458   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:44.749561   14011 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:45.065113   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:45.133889   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:45.133985   14011 retry.go:31] will retry after 264.968498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:45.399625   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:45.469630   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:45.469717   14011 retry.go:31] will retry after 768.000945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.240189   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.309621   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:46.309723   14011 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:46.309747   14011 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.309799   14011 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 02:05:46.309845   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.376929   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:46.379466   14011 retry.go:31] will retry after 255.955077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.635673   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.702129   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:46.702218   14011 retry.go:31] will retry after 198.113656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:46.900677   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:46.974060   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	I0718 02:05:46.974140   14011 retry.go:31] will retry after 370.309656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:47.346863   14011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043
	W0718 02:05:47.417816   14011 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043 returned with exit code 1
	W0718 02:05:47.417922   14011 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:47.417938   14011 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220718020505-4043": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220718020505-4043: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I0718 02:05:47.417945   14011 fix.go:57] fixHost completed within 26.776412308s
	I0718 02:05:47.417958   14011 start.go:81] releasing machines lock for "kubernetes-upgrade-20220718020505-4043", held for 26.776461983s
	W0718 02:05:47.418121   14011 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220718020505-4043" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I0718 02:05:47.460569   14011 out.go:177] 
	W0718 02:05:47.481968   14011 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220718020505-4043 container: docker volume create kubernetes-upgrade-20220718020505-4043 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220718020505-4043 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W0718 02:05:47.481995   14011 out.go:239] * 
	W0718 02:05:47.483095   14011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 02:05:47.545773   14011 out.go:177] 
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20220718020603-4043": docker container inspect stopped-upgrade-20220718020603-4043 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.51s)

TestPause/serial/Start (0.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220718020840-4043 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20220718020840-4043 --memory=2048 --install-addons=false --wait=all --driver=docker : exit status 69 (485.819919ms)

-- stdout --
	* [pause-20220718020840-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-20220718020840-4043 --memory=2048 --install-addons=false --wait=all --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220718020840-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20220718020840-4043: exit status 1 (64.769614ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220718020840-4043 -n pause-20220718020840-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220718020840-4043 -n pause-20220718020840-4043: exit status 85 (115.218962ms)

-- stdout --
	* Profile "pause-20220718020840-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20220718020840-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20220718020840-4043" host is not running, skipping log retrieval (state="* Profile \"pause-20220718020840-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20220718020840-4043\"")
--- FAIL: TestPause/serial/Start (0.67s)

TestNoKubernetes/serial/StartWithK8s (0.73s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --driver=docker 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --driver=docker : exit status 69 (517.70799ms)

-- stdout --
	* [NoKubernetes-20220718020841-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220718020841-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220718020841-4043: exit status 1 (67.15482ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043: exit status 85 (141.196215ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20220718020841-4043" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20220718020841-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20220718020841-4043\"")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (0.73s)

TestNoKubernetes/serial/StartWithStopK8s (0.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --driver=docker : exit status 69 (502.204376ms)

-- stdout --
	* [NoKubernetes-20220718020841-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220718020841-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220718020841-4043: exit status 1 (65.753757ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043: exit status 85 (115.619738ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20220718020841-4043" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20220718020841-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20220718020841-4043\"")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (0.68s)

TestNoKubernetes/serial/Start (0.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --driver=docker : exit status 69 (573.763212ms)

-- stdout --
	* [NoKubernetes-20220718020841-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220718020841-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220718020841-4043: exit status 1 (64.974718ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043: exit status 85 (115.775086ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20220718020841-4043" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20220718020841-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20220718020841-4043\"")
--- FAIL: TestNoKubernetes/serial/Start (0.76s)

TestNoKubernetes/serial/ProfileList (0.39s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:175: expected N/A in the profile list for kubernetes version but got : "out/minikube-darwin-amd64 profile list" : 
-- stdout --
	|-------------------------------------|-----------|---------|----|------|---------|---------|-------|--------|
	|               Profile               | VM Driver | Runtime | IP | Port | Version | Status  | Nodes | Active |
	|-------------------------------------|-----------|---------|----|------|---------|---------|-------|--------|
	| running-upgrade-20220718020814-4043 | docker    | docker  |    | 8443 | v1.18.0 | Unknown |     1 |        |
	|-------------------------------------|-----------|---------|----|------|---------|---------|-------|--------|

-- /stdout --
** stderr ** 
	! Found 2 invalid profile(s) ! 
	* 	 NoKubernetes-20220718020841-4043
	* 	 multinode-20220718014905-4043-m02
	* You can delete them using the following command(s): 
		 $ minikube delete -p NoKubernetes-20220718020841-4043 
		 $ minikube delete -p multinode-20220718014905-4043-m02 

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220718020841-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220718020841-4043: exit status 1 (65.477253ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043: exit status 85 (115.896532ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20220718020841-4043" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20220718020841-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20220718020841-4043\"")
--- FAIL: TestNoKubernetes/serial/ProfileList (0.39s)

TestNoKubernetes/serial/Stop (0.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220718020841-4043
no_kubernetes_test.go:158: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p NoKubernetes-20220718020841-4043: exit status 85 (135.63379ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
no_kubernetes_test.go:160: Failed to stop minikube "out/minikube-darwin-amd64 stop -p NoKubernetes-20220718020841-4043" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220718020841-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220718020841-4043: exit status 1 (64.259772ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043: exit status 85 (115.617007ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20220718020841-4043" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20220718020841-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20220718020841-4043\"")
--- FAIL: TestNoKubernetes/serial/Stop (0.32s)

TestNoKubernetes/serial/StartNoArgs (0.79s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --driver=docker 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --driver=docker : exit status 69 (578.084772ms)

-- stdout --
	* [NoKubernetes-20220718020841-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220718020841-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220718020841-4043: exit status 1 (68.377447ms)

-- stdout --
	[]

                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20220718020841-4043 -n NoKubernetes-20220718020841-4043: exit status 85 (141.047897ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20220718020841-4043" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20220718020841-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20220718020841-4043\"")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (0.79s)

TestNetworkPlugins/group/auto/Start (0.54s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : exit status 69 (541.051885ms)

-- stdout --
	* [auto-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:54.812350   15677 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:54.812506   15677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:54.812512   15677 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:54.812516   15677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:54.812613   15677 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:54.813100   15677 out.go:303] Setting JSON to false
	I0718 02:09:54.827960   15677 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4167,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:54.828053   15677 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:54.849617   15677 out.go:177] * [auto-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:54.892694   15677 notify.go:193] Checking for updates...
	I0718 02:09:54.914265   15677 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:54.935428   15677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:54.957660   15677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:54.979586   15677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:55.001412   15677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:55.022910   15677 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:55.089125   15677 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:55.110584   15677 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:55.152587   15677 start.go:284] selected driver: docker
	I0718 02:09:55.152612   15677 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:55.152642   15677 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:55.174468   15677 out.go:177] 
	W0718 02:09:55.196316   15677 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:55.196528   15677 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:55.260337   15677 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/auto/Start (0.54s)

TestNetworkPlugins/group/kindnet/Start (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 69 (495.757405ms)

-- stdout --
	* [kindnet-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:55.991803   15699 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:55.991962   15699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:55.991967   15699 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:55.991971   15699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:55.992078   15699 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:55.992575   15699 out.go:303] Setting JSON to false
	I0718 02:09:56.008363   15699 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4169,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:56.008453   15699 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:56.029995   15699 out.go:177] * [kindnet-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:56.072449   15699 notify.go:193] Checking for updates...
	I0718 02:09:56.093858   15699 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:56.114976   15699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:56.136258   15699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:56.158056   15699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:56.179279   15699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:56.201354   15699 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:56.267218   15699 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:56.288495   15699 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:56.330581   15699 start.go:284] selected driver: docker
	I0718 02:09:56.330606   15699 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:56.330634   15699 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:56.352242   15699 out.go:177] 
	W0718 02:09:56.373730   15699 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:56.373844   15699 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:56.395593   15699 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/kindnet/Start (0.50s)

TestNetworkPlugins/group/cilium/Start (0.54s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cilium-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : exit status 69 (541.58661ms)

-- stdout --
	* [cilium-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:57.129219   15723 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:57.129399   15723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:57.129406   15723 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:57.129410   15723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:57.129507   15723 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:57.130045   15723 out.go:303] Setting JSON to false
	I0718 02:09:57.145867   15723 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4170,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:57.145956   15723 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:57.167449   15723 out.go:177] * [cilium-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:57.210552   15723 notify.go:193] Checking for updates...
	I0718 02:09:57.232374   15723 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:57.274996   15723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:57.317273   15723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:57.359317   15723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:57.381064   15723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:57.402875   15723 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:57.468958   15723 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:57.490785   15723 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:57.512311   15723 start.go:284] selected driver: docker
	I0718 02:09:57.512342   15723 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:57.512371   15723 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:57.534530   15723 out.go:177] 
	W0718 02:09:57.556691   15723 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:57.556796   15723 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:57.578348   15723 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/cilium/Start (0.54s)

TestNetworkPlugins/group/calico/Start (0.52s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 69 (524.111377ms)

-- stdout --
	* [calico-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:58.312647   15747 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:58.312814   15747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:58.312819   15747 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:58.312823   15747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:58.312922   15747 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:58.313418   15747 out.go:303] Setting JSON to false
	I0718 02:09:58.328685   15747 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4171,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:58.328801   15747 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:58.349239   15747 out.go:177] * [calico-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:58.392574   15747 notify.go:193] Checking for updates...
	I0718 02:09:58.414266   15747 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:58.436105   15747 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:58.458617   15747 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:58.480641   15747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:58.502419   15747 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:58.530671   15747 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:58.595420   15747 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:58.616080   15747 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:58.658926   15747 start.go:284] selected driver: docker
	I0718 02:09:58.658952   15747 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:58.658979   15747 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:58.680903   15747 out.go:177] 
	W0718 02:09:58.702742   15747 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:58.702818   15747 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:58.744755   15747 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/calico/Start (0.52s)

TestNetworkPlugins/group/false/Start (0.52s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-20220718020414-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : exit status 69 (518.888538ms)

-- stdout --
	* [false-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:09:59.475422   15769 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:09:59.475549   15769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:59.475554   15769 out.go:309] Setting ErrFile to fd 2...
	I0718 02:09:59.475557   15769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:09:59.475666   15769 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:09:59.476152   15769 out.go:303] Setting JSON to false
	I0718 02:09:59.491070   15769 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4172,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:09:59.491198   15769 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:09:59.512807   15769 out.go:177] * [false-20220718020414-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:09:59.554706   15769 notify.go:193] Checking for updates...
	I0718 02:09:59.576549   15769 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:09:59.597465   15769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:09:59.618877   15769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:09:59.640906   15769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:09:59.664333   15769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:09:59.684786   15769 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:09:59.750933   15769 docker.go:113] docker version returned error: exit status 1
	I0718 02:09:59.772323   15769 out.go:177] * Using the docker driver based on user configuration
	I0718 02:09:59.815393   15769 start.go:284] selected driver: docker
	I0718 02:09:59.815446   15769 start.go:808] validating driver "docker" against <nil>
	I0718 02:09:59.815475   15769 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:09:59.837087   15769 out.go:177] 
	W0718 02:09:59.858508   15769 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:09:59.858624   15769 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:09:59.902140   15769 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/false/Start (0.52s)

TestNetworkPlugins/group/bridge/Start (0.52s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p bridge-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : exit status 69 (519.729075ms)

-- stdout --
	* [bridge-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:00.640164   15791 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:00.640378   15791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:00.640383   15791 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:00.640387   15791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:00.640494   15791 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:00.640994   15791 out.go:303] Setting JSON to false
	I0718 02:10:00.655963   15791 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4173,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:00.656087   15791 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:00.677371   15791 out.go:177] * [bridge-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:00.720261   15791 notify.go:193] Checking for updates...
	I0718 02:10:00.742197   15791 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:00.763166   15791 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:00.784176   15791 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:00.805072   15791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:00.826619   15791 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:00.848562   15791 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:00.914828   15791 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:00.936160   15791 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:00.979661   15791 start.go:284] selected driver: docker
	I0718 02:10:00.979688   15791 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:00.979716   15791 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:01.001860   15791 out.go:177] 
	W0718 02:10:01.023747   15791 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:01.023818   15791 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:01.065876   15791 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.52s)

TestNetworkPlugins/group/enable-default-cni/Start (0.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : exit status 69 (522.965182ms)

-- stdout --
	* [enable-default-cni-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:01.818203   15813 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:01.818398   15813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:01.818404   15813 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:01.818407   15813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:01.818505   15813 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:01.818996   15813 out.go:303] Setting JSON to false
	I0718 02:10:01.834182   15813 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4174,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:01.834264   15813 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:01.855896   15813 out.go:177] * [enable-default-cni-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:01.898926   15813 notify.go:193] Checking for updates...
	I0718 02:10:01.920614   15813 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:01.941412   15813 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:01.962900   15813 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:01.984947   15813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:02.006772   15813 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:02.029106   15813 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:02.096014   15813 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:02.117844   15813 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:02.160539   15813 start.go:284] selected driver: docker
	I0718 02:10:02.160573   15813 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:02.160603   15813 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:02.182708   15813 out.go:177] 
	W0718 02:10:02.204766   15813 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:02.204923   15813 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:02.247682   15813 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.52s)

TestNetworkPlugins/group/kubenet/Start (0.5s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20220718020413-4043 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : exit status 69 (497.197019ms)

-- stdout --
	* [kubenet-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:02.974920   15837 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:02.975073   15837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:02.975079   15837 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:02.975083   15837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:02.975178   15837 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:02.975657   15837 out.go:303] Setting JSON to false
	I0718 02:10:02.990702   15837 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4175,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:02.990810   15837 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:03.013226   15837 out.go:177] * [kubenet-20220718020413-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:03.055788   15837 notify.go:193] Checking for updates...
	I0718 02:10:03.077631   15837 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:03.098643   15837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:03.120091   15837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:03.141899   15837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:03.163831   15837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:03.185263   15837 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:03.252205   15837 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:03.273338   15837 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:03.294733   15837 start.go:284] selected driver: docker
	I0718 02:10:03.294752   15837 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:03.294782   15837 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:03.315968   15837 out.go:177] 
	W0718 02:10:03.337250   15837 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:03.337325   15837 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:03.358043   15837 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.50s)

TestStartStop/group/old-k8s-version/serial/FirstStart (0.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220718021004-4043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220718021004-4043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 69 (500.908833ms)

-- stdout --
	* [old-k8s-version-20220718021004-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:04.110455   15861 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:04.110704   15861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:04.110709   15861 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:04.110713   15861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:04.110817   15861 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:04.111308   15861 out.go:303] Setting JSON to false
	I0718 02:10:04.126162   15861 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4177,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:04.126263   15861 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:04.147787   15861 out.go:177] * [old-k8s-version-20220718021004-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:04.190719   15861 notify.go:193] Checking for updates...
	I0718 02:10:04.212409   15861 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:04.233752   15861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:04.255640   15861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:04.276884   15861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:04.298832   15861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:04.320884   15861 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:04.388152   15861 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:04.410038   15861 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:04.453562   15861 start.go:284] selected driver: docker
	I0718 02:10:04.453590   15861 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:04.453617   15861 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:04.475447   15861 out.go:177] 
	W0718 02:10:04.496924   15861 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:04.497013   15861 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:04.518506   15861 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220718021004-4043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.514803ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (115.030437ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (0.69s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220718021004-4043 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220718021004-4043 create -f testdata/busybox.yaml: exit status 1 (29.407893ms)

** stderr ** 
	error: context "old-k8s-version-20220718021004-4043" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220718021004-4043 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.65196ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (115.933311ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (66.938556ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (115.464667ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220718021004-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220718021004-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (162.528991ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "old-k8s-version-20220718021004-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220718021004-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220718021004-4043 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220718021004-4043 describe deploy/metrics-server -n kube-system: exit status 1 (28.951863ms)

** stderr ** 
	error: context "old-k8s-version-20220718021004-4043" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220718021004-4043 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.593172ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (119.217306ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.38s)

TestStartStop/group/old-k8s-version/serial/Stop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220718021004-4043 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p old-k8s-version-20220718021004-4043 --alsologtostderr -v=3: exit status 85 (115.609494ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:05.574023   15887 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:05.574210   15887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:05.574215   15887 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:05.574222   15887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:05.574330   15887 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:05.574630   15887 out.go:303] Setting JSON to false
	I0718 02:10:05.574749   15887 mustload.go:65] Loading cluster: old-k8s-version-20220718021004-4043
	I0718 02:10:05.596030   15887 out.go:177] * Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:05.617384   15887 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p old-k8s-version-20220718021004-4043 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (64.659299ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (115.896908ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (0.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (135.376396ms)

-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 85 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\""*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220718021004-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220718021004-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (140.947615ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "old-k8s-version-20220718021004-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220718021004-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.161019ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (116.271699ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (0.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220718021004-4043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220718021004-4043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 69 (497.482193ms)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220718021004-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 02:10:06.329935   15901 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:06.330121   15901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:06.330127   15901 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:06.330131   15901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:06.330234   15901 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:06.330646   15901 out.go:303] Setting JSON to false
	I0718 02:10:06.345515   15901 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4179,"bootTime":1658131227,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:06.345604   15901 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:06.366450   15901 out.go:177] * [old-k8s-version-20220718021004-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:06.409568   15901 notify.go:193] Checking for updates...
	I0718 02:10:06.431495   15901 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:06.453156   15901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:06.474621   15901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:06.496249   15901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:06.517428   15901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:06.539778   15901 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:06.605882   15901 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:06.627733   15901 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:06.670407   15901 start.go:284] selected driver: docker
	I0718 02:10:06.670439   15901 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:06.670473   15901 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:06.692304   15901 out.go:177] 
	W0718 02:10:06.714022   15901 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:06.714097   15901 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:06.735289   15901 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220718021004-4043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (66.910278ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (114.937106ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (0.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220718021004-4043" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.945351ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (116.478988ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220718021004-4043" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220718021004-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220718021004-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.134661ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220718021004-4043" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220718021004-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.139112ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (136.367017ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20220718021004-4043 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-20220718021004-4043 "sudo crictl images -o json": exit status 85 (123.005175ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-20220718021004-4043 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:304: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.602108ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (115.892477ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20220718021004-4043 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-20220718021004-4043 --alsologtostderr -v=1: exit status 85 (117.366646ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 02:10:07.738692   15928 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:07.738864   15928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:07.738871   15928 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:07.738875   15928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:07.738979   15928 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:07.739270   15928 out.go:303] Setting JSON to false
	I0718 02:10:07.739287   15928 mustload.go:65] Loading cluster: old-k8s-version-20220718021004-4043
	I0718 02:10:07.762563   15928 out.go:177] * Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:07.784227   15928 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p old-k8s-version-20220718021004-4043 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (65.340444ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (116.65915ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220718021004-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220718021004-4043: exit status 1 (68.799976ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220718021004-4043 -n old-k8s-version-20220718021004-4043: exit status 85 (115.960972ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20220718021004-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20220718021004-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20220718021004-4043" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20220718021004-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20220718021004-4043\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220718021009-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0718 02:10:09.536417    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20220718021009-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (479.381604ms)

                                                
                                                
-- stdout --
	* [no-preload-20220718021009-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 02:10:09.176051   15965 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:09.176216   15965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:09.176221   15965 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:09.176225   15965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:09.176342   15965 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:09.176869   15965 out.go:303] Setting JSON to false
	I0718 02:10:09.191828   15965 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4182,"bootTime":1658131227,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:09.191922   15965 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:09.214144   15965 out.go:177] * [no-preload-20220718021009-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:09.236084   15965 notify.go:193] Checking for updates...
	I0718 02:10:09.257846   15965 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:09.279040   15965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:09.301114   15965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:09.323307   15965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:09.365903   15965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:09.387558   15965 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:09.453591   15965 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:09.475563   15965 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:09.497160   15965 start.go:284] selected driver: docker
	I0718 02:10:09.497214   15965 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:09.497245   15965 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:09.519347   15965 out.go:177] 
	W0718 02:10:09.541542   15965 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:09.541657   15965 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:09.563007   15965 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p no-preload-20220718021009-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (66.060206ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.589151ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (0.67s)

TestStartStop/group/no-preload/serial/DeployApp (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220718021009-4043 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-20220718021009-4043 create -f testdata/busybox.yaml: exit status 1 (29.068095ms)

** stderr ** 
	error: context "no-preload-20220718021009-4043" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-20220718021009-4043 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (65.180954ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (115.738251ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (67.159814ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (115.075019ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220718021009-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220718021009-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (174.495959ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20220718021009-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220718021009-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220718021009-4043 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-20220718021009-4043 describe deploy/metrics-server -n kube-system: exit status 1 (31.475423ms)

** stderr ** 
	error: context "no-preload-20220718021009-4043" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20220718021009-4043 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (64.619793ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (114.900072ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.39s)

TestStartStop/group/no-preload/serial/Stop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220718021009-4043 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p no-preload-20220718021009-4043 --alsologtostderr -v=3: exit status 85 (116.674824ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:10.622862   15991 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:10.623021   15991 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:10.623026   15991 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:10.623030   15991 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:10.623134   15991 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:10.623435   15991 out.go:303] Setting JSON to false
	I0718 02:10:10.623559   15991 mustload.go:65] Loading cluster: no-preload-20220718021009-4043
	I0718 02:10:10.645378   15991 out.go:177] * Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:10.667439   15991 out.go:177]   To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p no-preload-20220718021009-4043 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (66.001334ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.727369ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (0.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.909036ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 85 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\""*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220718021009-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220718021009-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (146.458988ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20220718021009-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220718021009-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (64.550158ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (114.660981ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/no-preload/serial/SecondStart (0.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220718021009-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20220718021009-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (521.052665ms)

-- stdout --
	* [no-preload-20220718021009-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:11.369335   16005 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:11.369529   16005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:11.369535   16005 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:11.369539   16005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:11.369637   16005 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:11.370052   16005 out.go:303] Setting JSON to false
	I0718 02:10:11.384902   16005 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4184,"bootTime":1658131227,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:11.384977   16005 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:11.406640   16005 out.go:177] * [no-preload-20220718021009-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:11.448887   16005 notify.go:193] Checking for updates...
	I0718 02:10:11.470570   16005 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:11.491575   16005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:11.517679   16005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:11.538772   16005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:11.560641   16005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:11.581978   16005 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:11.648295   16005 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:11.669717   16005 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:11.711687   16005 start.go:284] selected driver: docker
	I0718 02:10:11.711713   16005 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:11.711751   16005 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:11.733414   16005 out.go:177] 
	W0718 02:10:11.754768   16005 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:11.754875   16005 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:11.776809   16005 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-20220718021009-4043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (68.005679ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (115.485455ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (0.71s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220718021009-4043" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (66.424518ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.781959ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.18s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220718021009-4043" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220718021009-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-20220718021009-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.14936ms)

** stderr ** 
	error: context "no-preload-20220718021009-4043" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20220718021009-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (65.57122ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.025461ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.21s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220718021009-4043 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-20220718021009-4043 "sudo crictl images -o json": exit status 85 (119.615172ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-20220718021009-4043 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"
start_stop_delete_test.go:304: v1.24.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.3",
- 	"k8s.gcr.io/kube-proxy:v1.24.3",
- 	"k8s.gcr.io/kube-scheduler:v1.24.3",
- 	"k8s.gcr.io/pause:3.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (65.542617ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.270701ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220718021009-4043 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-20220718021009-4043 --alsologtostderr -v=1: exit status 85 (115.742402ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:12.775373   16035 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:12.775560   16035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:12.775565   16035 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:12.775569   16035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:12.775665   16035 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:12.775960   16035 out.go:303] Setting JSON to false
	I0718 02:10:12.775978   16035 mustload.go:65] Loading cluster: no-preload-20220718021009-4043
	I0718 02:10:12.797459   16035 out.go:177] * Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:12.818832   16035 out.go:177]   To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p no-preload-20220718021009-4043 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (65.356443ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (116.435658ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220718021009-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220718021009-4043: exit status 1 (65.956606ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220718021009-4043 -n no-preload-20220718021009-4043: exit status 85 (115.41038ms)

-- stdout --
	* Profile "no-preload-20220718021009-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220718021009-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220718021009-4043" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220718021009-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220718021009-4043\"")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.48s)

TestStartStop/group/embed-certs/serial/FirstStart (0.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220718021014-4043 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20220718021014-4043 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (530.090667ms)

-- stdout --
	* [embed-certs-20220718021014-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:14.201972   16072 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:14.202213   16072 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:14.202218   16072 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:14.202222   16072 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:14.202317   16072 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:14.202819   16072 out.go:303] Setting JSON to false
	I0718 02:10:14.217580   16072 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4187,"bootTime":1658131227,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:14.217665   16072 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:14.239410   16072 out.go:177] * [embed-certs-20220718021014-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:14.281844   16072 notify.go:193] Checking for updates...
	I0718 02:10:14.303483   16072 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:14.325499   16072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:14.346743   16072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:14.368586   16072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:14.394870   16072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:14.421014   16072 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:14.486968   16072 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:14.508463   16072 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:14.551466   16072 start.go:284] selected driver: docker
	I0718 02:10:14.551492   16072 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:14.551522   16072 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:14.573117   16072 out.go:177] 
	W0718 02:10:14.594556   16072 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:14.594678   16072 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:14.638291   16072 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p embed-certs-20220718021014-4043 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.816773ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (115.567258ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (0.71s)

TestStartStop/group/embed-certs/serial/DeployApp (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220718021014-4043 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-20220718021014-4043 create -f testdata/busybox.yaml: exit status 1 (29.508498ms)

** stderr ** 
	error: context "embed-certs-20220718021014-4043" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-20220718021014-4043 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.318362ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (115.446794ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.760282ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (179.813714ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220718021014-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220718021014-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (143.499348ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20220718021014-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220718021014-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220718021014-4043 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-20220718021014-4043 describe deploy/metrics-server -n kube-system: exit status 1 (29.217849ms)

** stderr ** 
	error: context "embed-certs-20220718021014-4043" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20220718021014-4043 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (64.808308ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (116.278957ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.35s)

TestStartStop/group/embed-certs/serial/Stop (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220718021014-4043 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p embed-certs-20220718021014-4043 --alsologtostderr -v=3: exit status 85 (115.60129ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:15.725876   16098 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:15.726099   16098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:15.726104   16098 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:15.726108   16098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:15.726203   16098 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:15.726507   16098 out.go:303] Setting JSON to false
	I0718 02:10:15.726623   16098 mustload.go:65] Loading cluster: embed-certs-20220718021014-4043
	I0718 02:10:15.748437   16098 out.go:177] * Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:15.769779   16098 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p embed-certs-20220718021014-4043 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.720651ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (116.238751ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (0.30s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (118.433835ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 85 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\""*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220718021014-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220718021014-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (140.096541ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20220718021014-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220718021014-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (67.571077ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (116.229897ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/embed-certs/serial/SecondStart (0.69s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220718021014-4043 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20220718021014-4043 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (500.301935ms)

-- stdout --
	* [embed-certs-20220718021014-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:16.467937   16112 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:16.468110   16112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:16.468114   16112 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:16.468118   16112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:16.468209   16112 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:16.468699   16112 out.go:303] Setting JSON to false
	I0718 02:10:16.483461   16112 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4189,"bootTime":1658131227,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:16.483557   16112 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:16.505096   16112 out.go:177] * [embed-certs-20220718021014-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:16.548652   16112 notify.go:193] Checking for updates...
	I0718 02:10:16.570169   16112 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:16.591434   16112 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:16.613653   16112 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:16.635448   16112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:16.657564   16112 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:16.679803   16112 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:16.746401   16112 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:16.767881   16112 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:16.810999   16112 start.go:284] selected driver: docker
	I0718 02:10:16.811025   16112 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:16.811057   16112 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:16.832806   16112 out.go:177] 
	W0718 02:10:16.854843   16112 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:16.854958   16112 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:16.876800   16112 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p embed-certs-20220718021014-4043 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.840127ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (115.515081ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (0.69s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220718021014-4043" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.519545ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (115.365664ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220718021014-4043" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220718021014-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-20220718021014-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.330122ms)

** stderr ** 
	error: context "embed-certs-20220718021014-4043" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20220718021014-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (63.604902ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (116.258758ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220718021014-4043 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p embed-certs-20220718021014-4043 "sudo crictl images -o json": exit status 85 (116.816419ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p embed-certs-20220718021014-4043 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"
start_stop_delete_test.go:304: v1.24.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.3",
- 	"k8s.gcr.io/kube-proxy:v1.24.3",
- 	"k8s.gcr.io/kube-scheduler:v1.24.3",
- 	"k8s.gcr.io/pause:3.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (66.136398ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (116.292518ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (0.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220718021014-4043 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p embed-certs-20220718021014-4043 --alsologtostderr -v=1: exit status 85 (113.194101ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:17.845550   16139 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:17.845754   16139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:17.845760   16139 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:17.845764   16139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:17.845862   16139 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:17.846162   16139 out.go:303] Setting JSON to false
	I0718 02:10:17.846177   16139 mustload.go:65] Loading cluster: embed-certs-20220718021014-4043
	I0718 02:10:17.867173   16139 out.go:177] * Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:17.888151   16139 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p embed-certs-20220718021014-4043 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.204902ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (115.978491ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220718021014-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220718021014-4043: exit status 1 (65.333305ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220718021014-4043 -n embed-certs-20220718021014-4043: exit status 85 (116.250615ms)

-- stdout --
	* Profile "embed-certs-20220718021014-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220718021014-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220718021014-4043" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220718021014-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220718021014-4043\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.48s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.68s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220718021019-4043 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220718021019-4043 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (501.059162ms)

-- stdout --
	* [default-k8s-different-port-20220718021019-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:19.725598   16188 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:19.725830   16188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:19.725835   16188 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:19.725839   16188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:19.725936   16188 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:19.726462   16188 out.go:303] Setting JSON to false
	I0718 02:10:19.741617   16188 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4192,"bootTime":1658131227,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:19.741737   16188 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:19.763204   16188 out.go:177] * [default-k8s-different-port-20220718021019-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:19.806117   16188 notify.go:193] Checking for updates...
	I0718 02:10:19.828109   16188 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:19.849894   16188 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:19.871235   16188 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:19.893330   16188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:19.915163   16188 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:19.937519   16188 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:20.003358   16188 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:20.025136   16188 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:20.068155   16188 start.go:284] selected driver: docker
	I0718 02:10:20.068182   16188 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:20.068250   16188 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:20.090123   16188 out.go:177] 
	W0718 02:10:20.112121   16188 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:20.112275   16188 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:20.134114   16188 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20220718021019-4043 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (64.751973ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (116.562724ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.68s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.4s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220718021019-4043 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220718021019-4043 create -f testdata/busybox.yaml: exit status 1 (29.990531ms)

** stderr ** 
	error: context "default-k8s-different-port-20220718021019-4043" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-different-port-20220718021019-4043 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.997577ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (116.619817ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (66.197068ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (115.004528ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.40s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.35s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220718021019-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220718021019-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (140.860225ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20220718021019-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220718021019-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220718021019-4043 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220718021019-4043 describe deploy/metrics-server -n kube-system: exit status 1 (29.577439ms)

** stderr ** 
	error: context "default-k8s-different-port-20220718021019-4043" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20220718021019-4043 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.232177ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (114.848761ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.35s)

TestStartStop/group/default-k8s-different-port/serial/Stop (0.3s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220718021019-4043 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220718021019-4043 --alsologtostderr -v=3: exit status 85 (115.660369ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:21.155340   16214 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:21.155502   16214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:21.155507   16214 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:21.155511   16214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:21.155616   16214 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:21.155903   16214 out.go:303] Setting JSON to false
	I0718 02:10:21.156021   16214 mustload.go:65] Loading cluster: default-k8s-different-port-20220718021019-4043
	I0718 02:10:21.177843   16214 out.go:177] * Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:21.199854   16214 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220718021019-4043 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (64.964888ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (114.798783ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (0.30s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (114.932163ms)

-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 85 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\""*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220718021019-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220718021019-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (139.570301ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20220718021019-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220718021019-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.568016ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (115.355203ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.44s)
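Every failure in this group traces back to the same daemon-level error, "Error response from daemon: Bad response from Docker engine". A fail-fast pre-flight probe (a hypothetical helper sketched below, not part of the minikube test suite) would surface that root cause once instead of once per subtest; the probe command shown in the comment is the one minikube itself reports failing with PROVIDER_DOCKER_VERSION_EXIT_1 later in this log.

```shell
# Hypothetical fail-fast probe. The probe command is injected as arguments
# so the sketch can be exercised without a running Docker daemon.
engine_healthy() {
  "$@" >/dev/null 2>&1
}

# Demo with stand-ins for a healthy and an unhealthy engine:
engine_healthy true && echo "engine healthy"
engine_healthy false || echo "engine unhealthy: restart Docker Desktop and re-run"

# Real usage would pass the command minikube runs:
#   engine_healthy docker version --format '{{.Server.Os}}-{{.Server.Version}}'
```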

TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220718021019-4043 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220718021019-4043 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (476.020137ms)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220718021019-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 02:10:21.887669   16228 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:21.887838   16228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:21.887843   16228 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:21.887847   16228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:21.887946   16228 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:21.888361   16228 out.go:303] Setting JSON to false
	I0718 02:10:21.904219   16228 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4194,"bootTime":1658131227,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:21.904293   16228 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:21.926462   16228 out.go:177] * [default-k8s-different-port-20220718021019-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:21.947645   16228 notify.go:193] Checking for updates...
	I0718 02:10:21.969129   16228 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:21.990629   16228 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:22.012725   16228 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:22.034502   16228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:22.055744   16228 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:22.078077   16228 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:22.144780   16228 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:22.165914   16228 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:22.207549   16228 start.go:284] selected driver: docker
	I0718 02:10:22.207597   16228 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:22.207622   16228 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:22.228677   16228 out.go:177] 
	W0718 02:10:22.249968   16228 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:22.250102   16228 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:22.271845   16228 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20220718021019-4043 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.544997ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (116.465371ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.66s)
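For triage, the non-zero exit codes in this report repeat with consistent meanings: 69 accompanies the PROVIDER_DOCKER_VERSION_EXIT_1 abort, 85 the "Profile ... not found" message (which the harness itself flags as "may be ok"), 10 the failed `addons enable`, and 1 plain docker CLI errors. A small lookup sketch, inferred only from the messages in this log (hypothetical helper, not a minikube tool):

```shell
# Hypothetical exit-code legend, derived from the output in this report.
describe_exit() {
  case "$1" in
    1)  echo "docker CLI error (daemon unreachable)" ;;
    10) echo "minikube addons enable failed" ;;
    69) echo "minikube abort: PROVIDER_DOCKER_VERSION_EXIT_1" ;;
    85) echo "profile not found (cluster was never created)" ;;
    *)  echo "unmapped exit status $1" ;;
  esac
}

describe_exit 85   # prints: profile not found (cluster was never created)
```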

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220718021019-4043" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (66.203793ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (114.449723ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.18s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220718021019-4043" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220718021019-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220718021019-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.487657ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20220718021019-4043" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220718021019-4043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.062576ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (119.750394ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.22s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220718021019-4043 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220718021019-4043 "sudo crictl images -o json": exit status 85 (117.756506ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220718021019-4043 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:304: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"
start_stop_delete_test.go:304: v1.24.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.3",
- 	"k8s.gcr.io/kube-proxy:v1.24.3",
- 	"k8s.gcr.io/kube-scheduler:v1.24.3",
- 	"k8s.gcr.io/pause:3.7",
}
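The `-want +got` diff above lists every expected v1.24.3 image as missing because the `ssh`/`crictl` step never ran, so the "got" side is empty. The comparison amounts to a set difference, sketched here with two of the expected tags as assumed sample data (no cluster involved):

```shell
# Sketch of the want/got image comparison: every `want` entry absent from
# `got` is reported with a leading "-", matching the diff format above.
want="k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0"
got=""   # empty: the crictl step failed, so no images were listed

for img in $want; do
  case " $got " in
    *" $img "*) ;;           # present in got: nothing to report
    *) echo "- $img" ;;      # missing from got
  esac
done
```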
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.297415ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (115.886762ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-different-port/serial/Pause (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220718021019-4043 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220718021019-4043 --alsologtostderr -v=1: exit status 85 (117.529146ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 02:10:23.247366   16257 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:23.247596   16257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:23.247601   16257 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:23.247605   16257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:23.247732   16257 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:23.248034   16257 out.go:303] Setting JSON to false
	I0718 02:10:23.248053   16257 mustload.go:65] Loading cluster: default-k8s-different-port-20220718021019-4043
	I0718 02:10:23.269950   16257 out.go:177] * Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:23.292273   16257 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220718021019-4043 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (65.686331ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (117.564321ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220718021019-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220718021019-4043: exit status 1 (66.174831ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220718021019-4043 -n default-k8s-different-port-20220718021019-4043: exit status 85 (116.960442ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20220718021019-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220718021019-4043"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220718021019-4043" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220718021019-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220718021019-4043\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (0.49s)

TestStartStop/group/newest-cni/serial/FirstStart (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220718021024-4043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20220718021024-4043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (482.454443ms)

                                                
                                                
-- stdout --
	* [newest-cni-20220718021024-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 02:10:24.699604   16292 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:24.699759   16292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:24.699765   16292 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:24.699769   16292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:24.699866   16292 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:24.700361   16292 out.go:303] Setting JSON to false
	I0718 02:10:24.715358   16292 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4197,"bootTime":1658131227,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:24.715460   16292 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:24.737169   16292 out.go:177] * [newest-cni-20220718021024-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:24.759352   16292 notify.go:193] Checking for updates...
	I0718 02:10:24.781351   16292 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:24.808092   16292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:24.830105   16292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:24.852058   16292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:24.873975   16292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:24.896234   16292 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:24.962704   16292 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:24.983791   16292 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:25.025744   16292 start.go:284] selected driver: docker
	I0718 02:10:25.025770   16292 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:25.025796   16292 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:25.047600   16292 out.go:177] 
	W0718 02:10:25.068664   16292 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:25.068756   16292 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:25.089777   16292 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p newest-cni-20220718021024-4043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (130.499916ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (116.518825ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (0.73s)
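Every failure in this group traces back to the same root cause: the Docker engine health probe minikube runs before selecting the docker driver (the command quoted verbatim in the `status for docker:` line above) fails with "Bad response from Docker engine". A minimal sketch of that probe, with an illustrative guard and messages added around the logged command:

```shell
# Health probe minikube runs when validating the "docker" driver
# (quoted in start.go:819 above). On a healthy engine this prints
# something like "linux-20.10.17"; in this run it returned
# "Error response from daemon: Bad response from Docker engine"
# with exit status 1, which minikube reports as PROVIDER_DOCKER_VERSION_EXIT_1.
if docker version --format '{{.Server.Os}}-{{.Server.Version}}'; then
  echo "docker engine healthy"
else
  echo "docker engine unhealthy; expect PROVIDER_DOCKER_VERSION_EXIT_1 from minikube start"
fi
```

Because every later subtest in this serial group assumes the cluster from FirstStart exists, the subsequent exit-status-85 ("Profile ... not found") and exit-status-10 ("cluster ... does not exist") failures below appear to be consequences of this one probe failure rather than independent bugs.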

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220718021024-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220718021024-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (142.772319ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20220718021024-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220718021024-4043 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (66.216019ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (115.286535ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.33s)

TestStartStop/group/newest-cni/serial/Stop (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220718021024-4043 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p newest-cni-20220718021024-4043 --alsologtostderr -v=3: exit status 85 (116.30199ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:25.755440   16308 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:25.755626   16308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:25.755631   16308 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:25.755635   16308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:25.755747   16308 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:25.756059   16308 out.go:303] Setting JSON to false
	I0718 02:10:25.756181   16308 mustload.go:65] Loading cluster: newest-cni-20220718021024-4043
	I0718 02:10:25.777902   16308 out.go:177] * Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:25.799788   16308 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p newest-cni-20220718021024-4043 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (67.433008ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (116.224188ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (0.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (114.53071ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 85 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\""*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220718021024-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220718021024-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (141.778011ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20220718021024-4043" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220718021024-4043 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (66.05707ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (116.034079ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (0.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220718021024-4043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20220718021024-4043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: exit status 69 (520.399264ms)

-- stdout --
	* [newest-cni-20220718021024-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 02:10:26.497728   16322 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:26.497889   16322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:26.497894   16322 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:26.497898   16322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:26.497997   16322 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:26.498404   16322 out.go:303] Setting JSON to false
	I0718 02:10:26.513395   16322 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4199,"bootTime":1658131227,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 02:10:26.513495   16322 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 02:10:26.535320   16322 out.go:177] * [newest-cni-20220718021024-4043] minikube v1.26.0 on Darwin 12.4
	I0718 02:10:26.577778   16322 notify.go:193] Checking for updates...
	I0718 02:10:26.599299   16322 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 02:10:26.620486   16322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 02:10:26.641773   16322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 02:10:26.663575   16322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 02:10:26.685468   16322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 02:10:26.706894   16322 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 02:10:26.785047   16322 docker.go:113] docker version returned error: exit status 1
	I0718 02:10:26.810134   16322 out.go:177] * Using the docker driver based on user configuration
	I0718 02:10:26.851289   16322 start.go:284] selected driver: docker
	I0718 02:10:26.851315   16322 start.go:808] validating driver "docker" against <nil>
	I0718 02:10:26.851343   16322 start.go:819] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
	I0718 02:10:26.879444   16322 out.go:177] 
	W0718 02:10:26.901180   16322 out.go:239] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W0718 02:10:26.901276   16322 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I0718 02:10:26.923100   16322 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p newest-cni-20220718021024-4043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (66.232656ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (116.340582ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (0.70s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220718021024-4043 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p newest-cni-20220718021024-4043 "sudo crictl images -o json": exit status 85 (139.123921ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p newest-cni-20220718021024-4043 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"
start_stop_delete_test.go:304: v1.24.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.3",
- 	"k8s.gcr.io/kube-proxy:v1.24.3",
- 	"k8s.gcr.io/kube-scheduler:v1.24.3",
- 	"k8s.gcr.io/pause:3.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (66.027725ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (137.944498ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (0.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220718021024-4043 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p newest-cni-20220718021024-4043 --alsologtostderr -v=1: exit status 85 (115.706922ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
** stderr ** 
	I0718 02:10:27.544777   16340 out.go:296] Setting OutFile to fd 1 ...
	I0718 02:10:27.544943   16340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:27.544948   16340 out.go:309] Setting ErrFile to fd 2...
	I0718 02:10:27.544952   16340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 02:10:27.545051   16340 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 02:10:27.545343   16340 out.go:303] Setting JSON to false
	I0718 02:10:27.545359   16340 mustload.go:65] Loading cluster: newest-cni-20220718021024-4043
	I0718 02:10:27.566501   16340 out.go:177] * Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	I0718 02:10:27.587857   16340 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p newest-cni-20220718021024-4043 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (66.07602ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (118.16536ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220718021024-4043
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220718021024-4043: exit status 1 (66.436213ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220718021024-4043 -n newest-cni-20220718021024-4043: exit status 85 (154.624794ms)

-- stdout --
	* Profile "newest-cni-20220718021024-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220718021024-4043"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220718021024-4043" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220718021024-4043\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220718021024-4043\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.52s)
E0718 02:10:29.685749    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
E0718 02:11:31.459750    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
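The failure above is internally consistent: the profile was never created, so `docker inspect` returns an empty `[]` with exit status 1, and `minikube status` exits 85 with a "Profile not found" message, which the harness logs as "(may be ok)". A minimal sketch of the same post-mortem probe sequence, with the `docker` and `minikube` calls stubbed out so it runs without a daemon (swap the stub functions for the real `docker` and `out/minikube-darwin-amd64` binaries to reproduce against a live Docker Desktop):

```shell
#!/bin/sh
# Post-mortem sketch: probe a (possibly missing) minikube profile the way
# helpers_test.go does. The docker/minikube calls are stubbed so this runs
# anywhere; the outputs and exit codes mirror what the report shows.
PROFILE="newest-cni-20220718021024-4043"

docker_inspect() {            # stub for: docker inspect "$PROFILE"
  echo '[]'; return 1
}
minikube_status() {           # stub for: minikube status --format={{.Host}} -p "$PROFILE"
  echo "* Profile \"$PROFILE\" not found."; return 85
}

# Capture output and exit status without aborting under `set -e`.
inspect_rc=0
inspect_out=$(docker_inspect) || inspect_rc=$?
status_rc=0
status_out=$(minikube_status) || status_rc=$?

echo "docker inspect   -> rc=$inspect_rc out=$inspect_out"
echo "minikube status  -> rc=$status_rc ($status_out)"
if [ "$status_rc" -eq 85 ]; then
  echo "host is not running, skipping log retrieval"
fi
```

Exit code 85 is what the report records for every `minikube` command run against the missing profile, which is why each status check above is treated as a skip rather than a hard failure.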


Test pass (149/245)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 78.97
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.33
10 TestDownloadOnly/v1.24.3/json-events 4.31
11 TestDownloadOnly/v1.24.3/preload-exists 0
14 TestDownloadOnly/v1.24.3/kubectl 0
15 TestDownloadOnly/v1.24.3/LogsDuration 0.35
16 TestDownloadOnly/DeleteAll 0.77
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.44
18 TestDownloadOnlyKic 10.05
19 TestBinaryMirror 1.71
20 TestOffline 52.11
22 TestAddons/Setup 164.87
26 TestAddons/parallel/MetricsServer 5.65
27 TestAddons/parallel/HelmTiller 11.23
29 TestAddons/parallel/CSI 47.88
30 TestAddons/parallel/Headlamp 10.27
32 TestAddons/serial/GCPAuth 18.27
33 TestAddons/StoppedEnableDisable 13.06
40 TestHyperKitDriverInstallOrUpdate 5.67
43 TestErrorSpam/setup 29.85
44 TestErrorSpam/start 2.35
45 TestErrorSpam/status 1.36
46 TestErrorSpam/pause 1.97
47 TestErrorSpam/unpause 1.99
48 TestErrorSpam/stop 13.12
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 47.89
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 40.55
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 1.63
59 TestFunctional/serial/CacheCmd/cache/add_remote 8.83
60 TestFunctional/serial/CacheCmd/cache/add_local 1.83
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.08
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
64 TestFunctional/serial/CacheCmd/cache/cache_reload 3.37
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.52
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
68 TestFunctional/serial/ExtraConfig 50.98
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 3.28
71 TestFunctional/serial/LogsFileCmd 3.3
73 TestFunctional/parallel/ConfigCmd 0.48
74 TestFunctional/parallel/DashboardCmd 13.4
75 TestFunctional/parallel/DryRun 1.8
76 TestFunctional/parallel/InternationalLanguage 0.6
77 TestFunctional/parallel/StatusCmd 1.45
80 TestFunctional/parallel/ServiceCmd 14.26
82 TestFunctional/parallel/AddonsCmd 0.3
83 TestFunctional/parallel/PersistentVolumeClaim 25.35
85 TestFunctional/parallel/SSHCmd 1.01
86 TestFunctional/parallel/CpCmd 1.8
87 TestFunctional/parallel/MySQL 26.75
88 TestFunctional/parallel/FileSync 0.49
89 TestFunctional/parallel/CertSync 2.87
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
97 TestFunctional/parallel/Version/short 0.16
98 TestFunctional/parallel/Version/components 0.69
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
103 TestFunctional/parallel/ImageCommands/ImageBuild 6.22
104 TestFunctional/parallel/ImageCommands/Setup 3.95
105 TestFunctional/parallel/DockerEnv/bash 1.78
106 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
107 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.41
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.62
110 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.84
111 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.26
112 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.01
113 TestFunctional/parallel/ImageCommands/ImageRemove 0.89
114 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.15
115 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.75
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.67
117 TestFunctional/parallel/ProfileCmd/profile_list 0.58
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.67
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/MountCmd/any-port 12.09
130 TestFunctional/parallel/MountCmd/specific-port 3.16
131 TestFunctional/delete_addon-resizer_images 0.17
132 TestFunctional/delete_my-image_image 0.07
133 TestFunctional/delete_minikube_cached_images 0.07
143 TestJSONOutput/start/Command 47.79
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.67
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.74
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.44
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.77
168 TestKicCustomNetwork/create_custom_network 33.48
169 TestKicCustomNetwork/use_default_bridge_network 34.58
170 TestKicExistingNetwork 32.65
171 TestKicCustomSubnet 31.94
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 68.83
176 TestMountStart/serial/StartWithMountFirst 7.89
177 TestMountStart/serial/VerifyMountFirst 0.44
178 TestMountStart/serial/StartWithMountSecond 8.08
179 TestMountStart/serial/VerifyMountSecond 0.45
180 TestMountStart/serial/DeleteFirst 2.29
181 TestMountStart/serial/VerifyMountPostDelete 0.44
182 TestMountStart/serial/Stop 1.61
183 TestMountStart/serial/RestartStopped 5.67
184 TestMountStart/serial/VerifyMountPostStop 0.44
187 TestMultiNode/serial/FreshStart2Nodes 110.93
188 TestMultiNode/serial/DeployApp2Nodes 8.76
189 TestMultiNode/serial/PingHostFrom2Pods 0.91
190 TestMultiNode/serial/AddNode 35.23
191 TestMultiNode/serial/ProfileList 0.61
192 TestMultiNode/serial/CopyFile 17.16
193 TestMultiNode/serial/StopNode 14.26
194 TestMultiNode/serial/StartAfterStop 20.08
195 TestMultiNode/serial/RestartKeepsNodes 113.54
196 TestMultiNode/serial/DeleteNode 18.84
197 TestMultiNode/serial/StopMultiNode 25.12
198 TestMultiNode/serial/RestartMultiNode 60.15
199 TestMultiNode/serial/ValidateNameConflict 34.14
205 TestScheduledStopUnix 103.53
206 TestSkaffold 60.71
208 TestInsufficientStorage 13.05
224 TestStoppedBinaryUpgrade/Setup 0.89
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.11
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
246 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.83
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 9.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/json-events (78.97s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220718012551-4043 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220718012551-4043 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (1m18.968475217s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (78.97s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.33s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220718012551-4043
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220718012551-4043: exit status 85 (329.191098ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220718012551-4043 | jenkins | v1.26.0 | 18 Jul 22 01:25 PDT |          |
	|         | download-only-20220718012551-4043 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |         |         |                     |          |
	|         | --container-runtime=docker        |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/18 01:25:51
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 01:25:51.208209    4045 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:25:51.208454    4045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:25:51.208459    4045 out.go:309] Setting ErrFile to fd 2...
	I0718 01:25:51.208463    4045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:25:51.208568    4045 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	W0718 01:25:51.208671    4045 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/config/config.json: no such file or directory
	I0718 01:25:51.209345    4045 out.go:303] Setting JSON to true
	I0718 01:25:51.225410    4045 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1524,"bootTime":1658131227,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 01:25:51.225530    4045 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 01:25:51.248213    4045 out.go:97] [download-only-20220718012551-4043] minikube v1.26.0 on Darwin 12.4
	W0718 01:25:51.248311    4045 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball: no such file or directory
	I0718 01:25:51.248358    4045 notify.go:193] Checking for updates...
	I0718 01:25:51.270181    4045 out.go:169] MINIKUBE_LOCATION=14606
	I0718 01:25:51.292039    4045 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 01:25:51.336150    4045 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 01:25:51.358145    4045 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 01:25:51.380394    4045 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	W0718 01:25:51.423900    4045 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 01:25:51.424308    4045 driver.go:360] Setting default libvirt URI to qemu:///system
	W0718 01:26:50.677775    4045 docker.go:113] docker version returned error: deadline exceeded running "docker version --format {{.Server.Os}}-{{.Server.Version}}": signal: killed
	I0718 01:26:50.700182    4045 out.go:97] Using the docker driver based on user configuration
	I0718 01:26:50.700213    4045 start.go:284] selected driver: docker
	I0718 01:26:50.700222    4045 start.go:808] validating driver "docker" against <nil>
	I0718 01:26:50.700317    4045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:26:50.852267    4045 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:26:50.878894    4045 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0718 01:26:50.899651    4045 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0718 01:26:50.958296    4045 out.go:169] 
	W0718 01:26:50.979578    4045 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0718 01:26:51.000478    4045 out.go:169] 
	I0718 01:26:51.043255    4045 out.go:169] 
	W0718 01:26:51.064269    4045 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0718 01:26:51.064381    4045 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0718 01:26:51.064419    4045 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0718 01:26:51.085087    4045 out.go:169] 
	I0718 01:26:51.106459    4045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:26:51.232505    4045 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0718 01:26:51.253173    4045 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0718 01:26:51.253259    4045 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0718 01:26:51.299357    4045 out.go:169] 
	W0718 01:26:51.320289    4045 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0718 01:26:51.320383    4045 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0718 01:26:51.320417    4045 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0718 01:26:51.341238    4045 out.go:169] 
	I0718 01:26:51.383202    4045 out.go:169] 
	W0718 01:26:51.404352    4045 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0718 01:26:51.425262    4045 out.go:169] 
	I0718 01:26:51.446190    4045 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0718 01:26:51.446319    4045 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 01:26:51.467360    4045 out.go:169] Using Docker Desktop driver with root privileges
	I0718 01:26:51.488346    4045 cni.go:95] Creating CNI manager for ""
	I0718 01:26:51.488379    4045 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0718 01:26:51.488391    4045 start_flags.go:310] config:
	{Name:download-only-20220718012551-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220718012551-4043 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:26:51.509196    4045 out.go:97] Starting control plane node download-only-20220718012551-4043 in cluster download-only-20220718012551-4043
	I0718 01:26:51.509236    4045 cache.go:120] Beginning downloading kic base image for docker with docker
	I0718 01:26:51.530292    4045 out.go:97] Pulling base image ...
	I0718 01:26:51.530337    4045 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0718 01:26:51.530373    4045 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0718 01:26:51.530510    4045 cache.go:107] acquiring lock: {Name:mkb949a99fd957c748a8dd90dc19bbec9cb91f41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.530521    4045 cache.go:107] acquiring lock: {Name:mkc6378b2d9752664cd973c584ccbb8bf7f86d84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.531390    4045 cache.go:107] acquiring lock: {Name:mkb49f6ec06b57944dc18f9f338f05f003d90a04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.531301    4045 cache.go:107] acquiring lock: {Name:mkd0aba3591c49fe190092d0ebfeb4e9b7028e5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.531599    4045 cache.go:107] acquiring lock: {Name:mkbb92d1fd24e369319ac1d28ff379c9365d3a2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.531552    4045 cache.go:107] acquiring lock: {Name:mkbf5dcea02e476e140c0ccabd145b67be4f2f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.531641    4045 cache.go:107] acquiring lock: {Name:mk4a604d3d89a16216da12121f76f88a2802263a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.531717    4045 cache.go:107] acquiring lock: {Name:mk4259d224f6bd30216b1b5d1c098598912acd95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 01:26:51.532255    4045 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 01:26:51.532246    4045 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0718 01:26:51.532451    4045 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0718 01:26:51.532464    4045 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0718 01:26:51.532493    4045 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0718 01:26:51.532507    4045 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/download-only-20220718012551-4043/config.json ...
	I0718 01:26:51.532549    4045 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0718 01:26:51.532572    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/download-only-20220718012551-4043/config.json: {Name:mkc612d90976d25b4c29cbc25fa12c54697e97e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 01:26:51.532612    4045 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0718 01:26:51.532695    4045 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0718 01:26:51.533055    4045 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0718 01:26:51.533450    4045 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0718 01:26:51.533449    4045 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0718 01:26:51.533450    4045 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0718 01:26:51.537552    4045 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.538871    4045 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.539242    4045 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.539550    4045 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.539691    4045 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.540017    4045 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.540243    4045 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.540321    4045 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0718 01:26:51.593796    4045 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0718 01:26:51.593958    4045 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0718 01:26:51.594072    4045 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0718 01:26:52.382270    4045 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0718 01:26:53.444734    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0718 01:26:53.445279    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0718 01:26:53.478133    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0718 01:26:53.481717    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0718 01:26:53.484794    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0718 01:26:53.531537    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0718 01:26:53.585219    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0718 01:26:53.594009    4045 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0718 01:26:53.799708    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0718 01:26:53.799725    4045 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 2.269101539s
	I0718 01:26:53.799736    4045 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0718 01:26:53.956789    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 01:26:53.956804    4045 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.426305461s
	I0718 01:26:53.956813    4045 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 01:26:54.282802    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0718 01:26:54.282818    4045 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 2.7513597s
	I0718 01:26:54.282827    4045 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0718 01:26:54.759394    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0718 01:26:54.759411    4045 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 3.22881207s
	I0718 01:26:54.759422    4045 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0718 01:26:54.889193    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0718 01:26:54.889211    4045 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 3.358617587s
	I0718 01:26:54.889220    4045 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0718 01:26:55.006531    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0718 01:26:55.006547    4045 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 3.475126061s
	I0718 01:26:55.006555    4045 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0718 01:26:55.098839    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0718 01:26:55.098857    4045 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 3.56834017s
	I0718 01:26:55.098867    4045 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0718 01:26:55.467952    4045 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0718 01:26:55.467968    4045 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 3.936578865s
	I0718 01:26:55.467977    4045 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0718 01:26:55.467994    4045 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220718012551-4043"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.33s)

TestDownloadOnly/v1.24.3/json-events (4.31s)

=== RUN   TestDownloadOnly/v1.24.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220718012551-4043 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220718012551-4043 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker : (4.312050757s)
--- PASS: TestDownloadOnly/v1.24.3/json-events (4.31s)

TestDownloadOnly/v1.24.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.3/preload-exists
--- PASS: TestDownloadOnly/v1.24.3/preload-exists (0.00s)

TestDownloadOnly/v1.24.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.3/kubectl
--- PASS: TestDownloadOnly/v1.24.3/kubectl (0.00s)

TestDownloadOnly/v1.24.3/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.24.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220718012551-4043
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220718012551-4043: exit status 85 (347.778352ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220718012551-4043 | jenkins | v1.26.0 | 18 Jul 22 01:25 PDT |          |
	|         | download-only-20220718012551-4043 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |         |         |                     |          |
	|         | --container-runtime=docker        |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	| start   | -o=json --download-only -p        | download-only-20220718012551-4043 | jenkins | v1.26.0 | 18 Jul 22 01:27 PDT |          |
	|         | download-only-20220718012551-4043 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.24.3      |                                   |         |         |                     |          |
	|         | --container-runtime=docker        |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/18 01:27:10
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 01:27:10.739128    5618 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:27:10.739260    5618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:27:10.739266    5618 out.go:309] Setting ErrFile to fd 2...
	I0718 01:27:10.739269    5618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:27:10.739367    5618 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	W0718 01:27:10.739461    5618 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/config/config.json: no such file or directory
	I0718 01:27:10.739795    5618 out.go:303] Setting JSON to true
	I0718 01:27:10.754704    5618 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1603,"bootTime":1658131227,"procs":368,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 01:27:10.754783    5618 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 01:27:10.777105    5618 out.go:97] [download-only-20220718012551-4043] minikube v1.26.0 on Darwin 12.4
	I0718 01:27:10.777920    5618 notify.go:193] Checking for updates...
	W0718 01:27:10.778187    5618 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/cache/preloaded-tarball: no such file or directory
	I0718 01:27:10.799827    5618 out.go:169] MINIKUBE_LOCATION=14606
	I0718 01:27:10.821695    5618 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 01:27:10.843845    5618 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 01:27:10.865995    5618 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 01:27:10.887930    5618 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220718012551-4043"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.3/LogsDuration (0.35s)

TestDownloadOnly/DeleteAll (0.77s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.77s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.44s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220718012551-4043
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.44s)

TestDownloadOnlyKic (10.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220718012716-4043 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220718012716-4043 --force --alsologtostderr --driver=docker : (8.867776936s)
helpers_test.go:175: Cleaning up "download-docker-20220718012716-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220718012716-4043
--- PASS: TestDownloadOnlyKic (10.05s)

TestBinaryMirror (1.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220718012727-4043 --alsologtostderr --binary-mirror http://127.0.0.1:49564 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220718012727-4043 --alsologtostderr --binary-mirror http://127.0.0.1:49564 --driver=docker : (1.023336425s)
helpers_test.go:175: Cleaning up "binary-mirror-20220718012727-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220718012727-4043
--- PASS: TestBinaryMirror (1.71s)

TestOffline (52.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220718020413-4043 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220718020413-4043 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (48.748864752s)
helpers_test.go:175: Cleaning up "offline-docker-20220718020413-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220718020413-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220718020413-4043: (3.362019228s)
--- PASS: TestOffline (52.11s)

TestAddons/Setup (164.87s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220718012728-4043 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220718012728-4043 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m44.869830473s)
--- PASS: TestAddons/Setup (164.87s)

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.274764ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-qz45d" [2044c4ce-e2c6-4af5-82b6-1f66f26be282] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008765218s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220718012728-4043 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/HelmTiller (11.23s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.393778ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-f7tvz" [1ae6fcff-5c39-4a52-b089-b93c0f2f5004] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009871032s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220718012728-4043 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220718012728-4043 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.720006841s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.23s)

TestAddons/parallel/CSI (47.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 3.993537ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220718012728-4043 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:516: (dbg) Done: kubectl --context addons-20220718012728-4043 create -f testdata/csi-hostpath-driver/pvc.yaml: (3.015762467s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220718012728-4043 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220718012728-4043 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [61bac8a7-c618-44c9-aa42-356829ef1332] Pending
helpers_test.go:342: "task-pv-pod" [61bac8a7-c618-44c9-aa42-356829ef1332] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [61bac8a7-c618-44c9-aa42-356829ef1332] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.008997709s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220718012728-4043 create -f testdata/csi-hostpath-driver/snapshot.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220718012728-4043 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220718012728-4043 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220718012728-4043 delete pod task-pv-pod
addons_test.go:546: (dbg) Done: kubectl --context addons-20220718012728-4043 delete pod task-pv-pod: (1.057542498s)
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220718012728-4043 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220718012728-4043 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220718012728-4043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220718012728-4043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220718012728-4043 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [70cba88a-2015-4661-9118-849d127faf18] Pending
helpers_test.go:342: "task-pv-pod-restore" [70cba88a-2015-4661-9118-849d127faf18] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [70cba88a-2015-4661-9118-849d127faf18] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.013099514s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220718012728-4043 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220718012728-4043 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220718012728-4043 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.891681726s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.88s)

TestAddons/parallel/Headlamp (10.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220718012728-4043 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220718012728-4043 --alsologtostderr -v=1: (1.265506841s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-bbj5z" [4276501d-7abf-4ca2-ac3c-67490d8704ca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-bbj5z" [4276501d-7abf-4ca2-ac3c-67490d8704ca] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.006820396s
--- PASS: TestAddons/parallel/Headlamp (10.27s)

TestAddons/serial/GCPAuth (18.27s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220718012728-4043 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220718012728-4043 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [54c47703-9541-4d3d-92c7-345abd86fdf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [54c47703-9541-4d3d-92c7-345abd86fdf8] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 11.009284355s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220718012728-4043 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220718012728-4043 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220718012728-4043 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220718012728-4043 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220718012728-4043 addons disable gcp-auth --alsologtostderr -v=1: (6.65427666s)
--- PASS: TestAddons/serial/GCPAuth (18.27s)

TestAddons/StoppedEnableDisable (13.06s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220718012728-4043
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220718012728-4043: (12.674359722s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220718012728-4043
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220718012728-4043
--- PASS: TestAddons/StoppedEnableDisable (13.06s)

TestHyperKitDriverInstallOrUpdate (5.67s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
E0718 02:09:08.063655    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (5.67s)

TestErrorSpam/setup (29.85s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220718013146-4043 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220718013146-4043 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 --driver=docker : (29.851950562s)
--- PASS: TestErrorSpam/setup (29.85s)

TestErrorSpam/start (2.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 start --dry-run
--- PASS: TestErrorSpam/start (2.35s)

TestErrorSpam/status (1.36s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 status
--- PASS: TestErrorSpam/status (1.36s)

TestErrorSpam/pause (1.97s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 pause
--- PASS: TestErrorSpam/pause (1.97s)

TestErrorSpam/unpause (1.99s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (13.12s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 stop: (12.446417257s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220718013146-4043 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220718013146-4043 stop
--- PASS: TestErrorSpam/stop (13.12s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/files/etc/test/nested/copy/4043/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.89s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (47.894496308s)
--- PASS: TestFunctional/serial/StartWithProxy (47.89s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.55s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --alsologtostderr -v=8: (40.549799293s)
functional_test.go:655: soft start took 40.55044684s for "functional-20220718013239-4043" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.55s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.63s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220718013239-4043 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220718013239-4043 get po -A: (1.632189824s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.63s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add k8s.gcr.io/pause:3.1: (2.036736566s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add k8s.gcr.io/pause:3.3: (3.573161034s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add k8s.gcr.io/pause:latest: (3.215713492s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.83s)

TestFunctional/serial/CacheCmd/cache/add_local (1.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220718013239-4043 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1755618154/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add minikube-local-cache-test:functional-20220718013239-4043
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache add minikube-local-cache-test:functional-20220718013239-4043: (1.302844265s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache delete minikube-local-cache-test:functional-20220718013239-4043
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220718013239-4043
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (448.368864ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 cache reload: (1.970312413s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.37s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 kubectl -- --context functional-20220718013239-4043 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220718013239-4043 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (50.98s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0718 01:35:13.631867    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:13.638837    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:13.650826    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:13.671971    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:13.712099    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:13.792399    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:13.954641    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:14.274773    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:14.917079    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:35:16.197322    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.979145603s)
functional_test.go:753: restart took 50.979243254s for "functional-20220718013239-4043" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (50.98s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220718013239-4043 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.28s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 logs
E0718 01:35:18.757890    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 logs: (3.281530403s)
--- PASS: TestFunctional/serial/LogsCmd (3.28s)

TestFunctional/serial/LogsFileCmd (3.3s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2003678508/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2003678508/001/logs.txt: (3.299053529s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.30s)

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 config get cpus: exit status 14 (56.252249ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 config get cpus: exit status 14 (54.339791ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (13.4s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220718013239-4043 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220718013239-4043 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 8078: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.40s)

TestFunctional/parallel/DryRun (1.8s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (823.711985ms)
-- stdout --
	* [functional-20220718013239-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0718 01:36:34.033935    7961 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:36:34.034224    7961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:36:34.034230    7961 out.go:309] Setting ErrFile to fd 2...
	I0718 01:36:34.034236    7961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:36:34.034382    7961 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:36:34.055431    7961 out.go:303] Setting JSON to false
	I0718 01:36:34.073400    7961 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2167,"bootTime":1658131227,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 01:36:34.073505    7961 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 01:36:34.149464    7961 out.go:177] * [functional-20220718013239-4043] minikube v1.26.0 on Darwin 12.4
	I0718 01:36:34.212409    7961 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 01:36:34.233686    7961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 01:36:34.291497    7961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 01:36:34.333567    7961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 01:36:34.354556    7961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 01:36:34.376188    7961 config.go:178] Loaded profile config "functional-20220718013239-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0718 01:36:34.376554    7961 driver.go:360] Setting default libvirt URI to qemu:///system
	I0718 01:36:34.465553    7961 docker.go:137] docker version: linux-20.10.17
	I0718 01:36:34.465721    7961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:36:34.619302    7961 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-18 08:36:34.533587491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:36:34.662152    7961 out.go:177] * Using the docker driver based on existing profile
	I0718 01:36:34.683182    7961 start.go:284] selected driver: docker
	I0718 01:36:34.683200    7961 start.go:808] validating driver "docker" against &{Name:functional-20220718013239-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220718013239-4043 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:36:34.683344    7961 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 01:36:34.707114    7961 out.go:177] 
	W0718 01:36:34.728141    7961 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0718 01:36:34.749079    7961 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.80s)
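The dry run above exits with status 23 because the requested `--memory 250MB` falls below minikube's usable minimum of 1800MB (the `RSRC_INSUFFICIENT_REQ_MEMORY` reason in the stderr capture). A minimal Go sketch of that validation follows; the constant, function names, and suffix parsing here are illustrative only and are not minikube's actual implementation (which parses sizes via a units library):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minUsableMiB mirrors the 1800MB floor quoted in the log above
// (illustrative constant, not read from minikube source).
const minUsableMiB = 1800

// parseMiB converts a value like "250MB" or "4000" to MiB.
// Hypothetical helper for this sketch only.
func parseMiB(s string) (int, error) {
	s = strings.TrimSuffix(strings.ToUpper(strings.TrimSpace(s)), "MB")
	return strconv.Atoi(s)
}

// checkMemory rejects allocations below the usable minimum,
// the same class of check that produced exit status 23 above.
func checkMemory(requested string) error {
	mib, err := parseMiB(requested)
	if err != nil {
		return err
	}
	if mib < minUsableMiB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", mib, minUsableMiB)
	}
	return nil
}

func main() {
	fmt.Println(checkMemory("250MB"))  // rejected, like the dry run above
	fmt.Println(checkMemory("4000MB")) // accepted
}
```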

TestFunctional/parallel/InternationalLanguage (0.6s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220718013239-4043 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (594.621395ms)

-- stdout --
	* [functional-20220718013239-4043] minikube v1.26.0 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0718 01:36:21.551395    7758 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:36:21.551708    7758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:36:21.551716    7758 out.go:309] Setting ErrFile to fd 2...
	I0718 01:36:21.551722    7758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:36:21.551996    7758 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:36:21.552674    7758 out.go:303] Setting JSON to false
	I0718 01:36:21.568139    7758 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2154,"bootTime":1658131227,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 01:36:21.568240    7758 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0718 01:36:21.589105    7758 out.go:177] * [functional-20220718013239-4043] minikube v1.26.0 sur Darwin 12.4
	I0718 01:36:21.631975    7758 out.go:177]   - MINIKUBE_LOCATION=14606
	I0718 01:36:21.653133    7758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	I0718 01:36:21.674880    7758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 01:36:21.695934    7758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 01:36:21.716971    7758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	I0718 01:36:21.740150    7758 config.go:178] Loaded profile config "functional-20220718013239-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0718 01:36:21.740504    7758 driver.go:360] Setting default libvirt URI to qemu:///system
	I0718 01:36:21.809088    7758 docker.go:137] docker version: linux-20.10.17
	I0718 01:36:21.809228    7758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 01:36:21.946651    7758 info.go:265] docker info: {ID:3XCN:EKL3:U5PY:ZD5A:6VVE:MMV6:M4TP:6HXY:HZHK:V4M5:PSPW:F7QP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-18 08:36:21.881050029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0718 01:36:21.967614    7758 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0718 01:36:21.989291    7758 start.go:284] selected driver: docker
	I0718 01:36:21.989316    7758 start.go:808] validating driver "docker" against &{Name:functional-20220718013239-4043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220718013239-4043 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0718 01:36:21.989526    7758 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 01:36:22.014363    7758 out.go:177] 
	W0718 01:36:22.036503    7758 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0718 01:36:22.058355    7758 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.60s)

TestFunctional/parallel/StatusCmd (1.45s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.45s)
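The status checks above exercise both a custom Go template (`-f host:{{.Host}},...`) and JSON output (`-o json`). A self-contained sketch of consuming the JSON form is below; the `Status` struct covers only the four fields the template selects, and the sample payload values are illustrative, not captured from this run:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Status models the fields the template above selects
// ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}});
// `minikube status -o json` emits these among other fields.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

// decodeStatus parses one status object from `minikube status -o json` output.
func decodeStatus(raw []byte) (Status, error) {
	var st Status
	err := json.Unmarshal(raw, &st)
	return st, err
}

func main() {
	// Sample payload in the shape the command emits; values are illustrative.
	raw := []byte(`{"Name":"functional-20220718013239-4043","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured"}`)
	st, err := decodeStatus(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("host:%s,kubelet:%s,apiserver:%s,kubeconfig:%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}
```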

TestFunctional/parallel/ServiceCmd (14.26s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220718013239-4043 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220718013239-4043 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-rxx8m" [3d1500b7-e6eb-4569-8db2-7241cff3de1d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-54c4b5c49f-rxx8m" [3d1500b7-e6eb-4569-8db2-7241cff3de1d] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.009195701s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 service list: (1.06581683s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 service --namespace=default --https --url hello-node
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 service --namespace=default --https --url hello-node: (2.029197392s)
functional_test.go:1475: found endpoint: https://127.0.0.1:51440
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 service hello-node --url --format={{.IP}}: (2.029681237s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 service hello-node --url: (2.027808684s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:51483
--- PASS: TestFunctional/parallel/ServiceCmd (14.26s)
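The `service hello-node --url --format={{.IP}}` invocation above passes a Go template to control how the endpoint is printed. A minimal sketch of that rendering follows; the `SvcURL` struct and `render` helper are hypothetical names for this example, and the field set is trimmed to just `IP` and `Port`:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// SvcURL holds the values a service --format template can reference
// in this sketch; the run above uses --format={{.IP}} to print only
// the address.
type SvcURL struct {
	IP   string
	Port int
}

// render executes a URL format template, as the --format flag does.
func render(format string, u SvcURL) (string, error) {
	tmpl, err := template.New("svc").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, u); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	u := SvcURL{IP: "127.0.0.1", Port: 51483} // port taken from the endpoint logged above
	out, _ := render("http://{{.IP}}:{{.Port}}", u)
	fmt.Println(out) // http://127.0.0.1:51483
	out, _ = render("{{.IP}}", u)
	fmt.Println(out) // 127.0.0.1
}
```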

TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (25.35s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [450ac4f6-c7a3-406b-9b10-03a8ff4d2d42] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008960754s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220718013239-4043 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220718013239-4043 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220718013239-4043 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220718013239-4043 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [d1b70afb-89da-4099-8a20-8ffe70c72e4b] Pending
helpers_test.go:342: "sp-pod" [d1b70afb-89da-4099-8a20-8ffe70c72e4b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [d1b70afb-89da-4099-8a20-8ffe70c72e4b] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008698096s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220718013239-4043 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220718013239-4043 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220718013239-4043 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [8b5c9d0c-9217-43ca-92d9-cc2e13939e58] Pending
helpers_test.go:342: "sp-pod" [8b5c9d0c-9217-43ca-92d9-cc2e13939e58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [8b5c9d0c-9217-43ca-92d9-cc2e13939e58] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010431376s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220718013239-4043 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.35s)
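The test above applies `testdata/storage-provisioner/pvc.yaml`, which is not reproduced in this log. A minimal claim of the kind it queries as `myclaim` would look roughly like the following; only the `myclaim` name comes from the log, and the spec values are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim   # matches the claim queried above
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi   # size is illustrative, not taken from the test data
```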

TestFunctional/parallel/SSHCmd (1.01s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "cat /etc/hostname"
E0718 01:35:23.879514    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/SSHCmd (1.01s)

TestFunctional/parallel/CpCmd (1.8s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh -n functional-20220718013239-4043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 cp functional-20220718013239-4043:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2857920169/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh -n functional-20220718013239-4043 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)

TestFunctional/parallel/MySQL (26.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220718013239-4043 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-67f7d69d8b-8s8bw" [142f56e3-87a7-4098-9715-204a874a5220] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-8s8bw" [142f56e3-87a7-4098-9715-204a874a5220] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.018045711s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;": exit status 1 (138.136638ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;": exit status 1 (131.695412ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;": exit status 1 (121.957412ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220718013239-4043 exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.75s)
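
The ERROR 2002 retries above show the usual readiness gap: the pod reports Running before mysqld inside it accepts connections, so the test simply re-runs the query until it succeeds. A minimal sketch of that retry loop (the `retry` helper and attempt budget are illustrative, not part of the test suite):

```shell
# Re-run a command until it succeeds or the attempt budget runs out.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Hypothetical usage against the pod from this log:
# retry 30 kubectl --context functional-20220718013239-4043 \
#   exec mysql-67f7d69d8b-8s8bw -- mysql -ppassword -e "show databases;"
```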

TestFunctional/parallel/FileSync (0.49s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/4043/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /etc/test/nested/copy/4043/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.49s)

TestFunctional/parallel/CertSync (2.87s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/4043.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /etc/ssl/certs/4043.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/4043.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /usr/share/ca-certificates/4043.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/40432.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /etc/ssl/certs/40432.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/40432.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /usr/share/ca-certificates/40432.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.87s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220718013239-4043 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo systemctl is-active crio": exit status 1 (460.266613ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
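
The non-zero exit above is the pass condition: `systemctl is-active` prints the unit state and encodes it in the exit code (0 for active, 3 for inactive, following the LSB convention), which is why the log shows `inactive` on stdout alongside `Process exited with status 3`. The same capture pattern, with a stand-in for the remote command:

```shell
# Stand-in for: out/minikube-darwin-amd64 ... ssh "sudo systemctl is-active crio"
check_crio() {
  echo "inactive"
  return 3
}

# Capture both the printed state and the exit code.
if state=$(check_crio); then
  echo "crio is active"
else
  rc=$?
  echo "crio state: $state (exit $rc)"
fi
```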

TestFunctional/parallel/Version/short (0.16s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220718013239-4043
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/nginx                     | alpine                         | f246e6f9d0b28 | 23.5MB |
| docker.io/kubernetesui/dashboard            | <none>                         | 1042d9e0d8fcc | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/kube-apiserver                   | v1.24.3                        | d521dd763e2e3 | 130MB  |
| k8s.gcr.io/kube-proxy                       | v1.24.3                        | 2ae1ba6417cbc | 110MB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220718013239-4043 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7                            | 459651132a111 | 429MB  |
| gcr.io/k8s-minikube/busybox                 | latest                         | beae173ccac6a | 1.24MB |
| k8s.gcr.io/kube-controller-manager          | v1.24.3                        | 586c112956dfc | 119MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.3                        | 3a5aa3a515f5d | 51MB   |
| docker.io/library/nginx                     | latest                         | 41b0e86104ba6 | 142MB  |
| k8s.gcr.io/etcd                             | 3.5.3-0                        | aebe758cef4cd | 299MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-20220718013239-4043 | db30bf21875b7 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-20220718013239-4043 | 057d7120a6ea3 | 30B    |
| k8s.gcr.io/pause                            | 3.7                            | 221177c6082a8 | 711kB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
|---------------------------------------------|--------------------------------|---------------|--------|
2022/07/18 01:36:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format json:
[{"id":"1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220718013239-4043"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"db30bf21875b7b04740acb373c39525bd53a554a9e9a2cb11f9554941c1b7e32","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220718013239-4043"],"size":"1240000"},{"id":"2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.3"],"size":"110000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.3"],"size":"130000000"},{"id":"41b0e86104ba681811bf60b4d6970ed24dd59e282b36c352b8a55823bbb5e14a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},{"id":"057d7120a6ea3811e8a606369ae8bbb85d604fef650d36ad45a442d491c614f9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220718013239-4043"],"size":"30"},{"id":"3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.3"],"size":"51000000"},{"id":"f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.3"],"size":"119000000"},{"id":"459651132a1115239f7370765464a0737d028ae7e74c68360740d81751fbae7e","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"429000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
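
The JSON format is the machine-friendly listing, and it can be post-processed with standard tooling. A sketch that sorts images by size, assuming `jq` is installed (the two-entry sample below stands in for the real `image ls --format json` output above):

```shell
# Live input would come from:
#   out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format json
command -v jq >/dev/null 2>&1 || { echo "jq not installed; skipping"; exit 0; }

# Sizes are JSON strings, hence the tonumber conversion before sorting.
sample='[{"repoTags":["docker.io/library/mysql:5.7"],"size":"429000000"},
        {"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]'
printf '%s' "$sample" \
  | jq -r 'sort_by(.size | tonumber) | reverse | .[] | "\(.repoTags[0])\t\(.size)"'
```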

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls --format yaml:
- id: 586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.3
size: "119000000"
- id: 2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.3
size: "110000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 41b0e86104ba681811bf60b4d6970ed24dd59e282b36c352b8a55823bbb5e14a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.3
size: "130000000"
- id: 3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.3
size: "51000000"
- id: 459651132a1115239f7370765464a0737d028ae7e74c68360740d81751fbae7e
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "429000000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: 057d7120a6ea3811e8a606369ae8bbb85d604fef650d36ad45a442d491c614f9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220718013239-4043
size: "30"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh pgrep buildkitd: exit status 1 (436.122461ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image build -t localhost/my-image:functional-20220718013239-4043 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image build -t localhost/my-image:functional-20220718013239-4043 testdata/build: (5.422624497s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image build -t localhost/my-image:functional-20220718013239-4043 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 57171de64465
Removing intermediate container 57171de64465
---> fa72d6e50686
Step 3/3 : ADD content.txt /
---> db30bf21875b
Successfully built db30bf21875b
Successfully tagged localhost/my-image:functional-20220718013239-4043
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.22s)
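
The three build steps in the output imply a build context along these lines; this is a reconstruction from the log, not the actual contents of testdata/build (which would also ship the content.txt file used by the ADD step):

```dockerfile
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```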

TestFunctional/parallel/ImageCommands/Setup (3.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.874406446s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.95s)

TestFunctional/parallel/DockerEnv/bash (1.78s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220718013239-4043 docker-env) && out/minikube-darwin-amd64 status -p functional-20220718013239-4043"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220718013239-4043 docker-env) && out/minikube-darwin-amd64 status -p functional-20220718013239-4043": (1.105238228s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220718013239-4043 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.78s)
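
The DockerEnv test works because `minikube docker-env` prints shell `export` statements and `eval` applies them to the current session, pointing the local `docker` client at the daemon inside the minikube node. A sketch of the pattern; the variable values here are illustrative, and the live invocation for this profile is shown commented:

```shell
# Live usage for the profile in this log:
#   eval "$(out/minikube-darwin-amd64 -p functional-20220718013239-4043 docker-env)"
#   docker images   # then lists images inside the cluster node

# What eval receives looks roughly like this (values illustrative):
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://127.0.0.1:58444"'
eval "$env_output"
echo "docker client now targets: $DOCKER_HOST"
```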

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043: (3.282495064s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
E0718 01:35:34.120088    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043: (2.494097574s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.795035554s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043: (3.98483364s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.26s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image save gcr.io/google-containers/addon-resizer:functional-20220718013239-4043 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image save gcr.io/google-containers/addon-resizer:functional-20220718013239-4043 /Users/jenkins/workspace/addon-resizer-save.tar: (2.014090921s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image rm gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.800577643s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.15s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220718013239-4043 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220718013239-4043: (2.609998868s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.75s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.67s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.67s)

TestFunctional/parallel/ProfileCmd/profile_list (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1310: Took "495.435555ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "81.188531ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.58s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.67s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "553.130938ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "119.213683ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220718013239-4043 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220718013239-4043 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [48359001-d3f5-42ba-afd1-f12726bf9e96] Pending
E0718 01:35:54.600898    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [48359001-d3f5-42ba-afd1-f12726bf9e96] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [48359001-d3f5-42ba-afd1-f12726bf9e96] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009053988s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220718013239-4043 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220718013239-4043 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 7705: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (12.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220718013239-4043 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1609832246/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1658133382105667000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1609832246/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1658133382105667000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1609832246/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1658133382105667000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1609832246/001/test-1658133382105667000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (442.811953ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 18 08:36 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 18 08:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 18 08:36 test-1658133382105667000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh cat /mount-9p/test-1658133382105667000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220718013239-4043 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [43307975-7be8-4d2e-9ef2-d0d730afdd70] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [43307975-7be8-4d2e-9ef2-d0d730afdd70] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [43307975-7be8-4d2e-9ef2-d0d730afdd70] Running
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [43307975-7be8-4d2e-9ef2-d0d730afdd70] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [43307975-7be8-4d2e-9ef2-d0d730afdd70] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.008106158s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220718013239-4043 logs busybox-mount
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220718013239-4043 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1609832246/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.09s)

TestFunctional/parallel/MountCmd/specific-port (3.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220718013239-4043 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3010884771/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (716.214908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "findmnt -T /mount-9p | grep 9p"
E0718 01:36:35.563313    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220718013239-4043 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3010884771/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh "sudo umount -f /mount-9p": exit status 1 (483.762712ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220718013239-4043 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220718013239-4043 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3010884771/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220718013239-4043
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220718013239-4043
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220718013239-4043
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (47.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220718014408-4043 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220718014408-4043 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (47.787475364s)
--- PASS: TestJSONOutput/start/Command (47.79s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220718014408-4043 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220718014408-4043 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.44s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220718014408-4043 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220718014408-4043 --output=json --user=testUser: (12.436471866s)
--- PASS: TestJSONOutput/stop/Command (12.44s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220718014512-4043 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220718014512-4043 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (328.826044ms)

-- stdout --
	{"specversion":"1.0","id":"d3b365cd-7a92-4618-b89c-ddeb31fb4cc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220718014512-4043] minikube v1.26.0 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de84d00f-4a13-4e0c-9aee-a9496d4cd6a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14606"}}
	{"specversion":"1.0","id":"a8e7dbd1-c216-4006-afd6-4c58857889ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig"}}
	{"specversion":"1.0","id":"fbff4423-6af5-4077-9854-e3d716285040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"68ee60c2-1feb-46e1-ace8-128001a2f4e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c585fc6-35df-47b2-945e-e1ca8eba1682","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube"}}
	{"specversion":"1.0","id":"b3e1b40d-2eb7-4c41-b813-fe8a88d5d270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220718014512-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220718014512-4043
--- PASS: TestErrorJSONOutput (0.77s)

TestKicCustomNetwork/create_custom_network (33.48s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220718014513-4043 --network=
E0718 01:45:13.629771    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:45:29.622720    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220718014513-4043 --network=: (30.678278684s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220718014513-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220718014513-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220718014513-4043: (2.738055771s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.48s)

TestKicCustomNetwork/use_default_bridge_network (34.58s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220718014547-4043 --network=bridge
E0718 01:45:57.319236    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220718014547-4043 --network=bridge: (31.971258825s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220718014547-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220718014547-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220718014547-4043: (2.538300165s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.58s)

TestKicExistingNetwork (32.65s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220718014621-4043 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220718014621-4043 --network=existing-network: (29.702659313s)
helpers_test.go:175: Cleaning up "existing-network-20220718014621-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220718014621-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220718014621-4043: (2.527846711s)
--- PASS: TestKicExistingNetwork (32.65s)

TestKicCustomSubnet (31.94s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220718014654-4043 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220718014654-4043 --subnet=192.168.60.0/24: (29.111699986s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220718014654-4043 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220718014654-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220718014654-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220718014654-4043: (2.762069756s)
--- PASS: TestKicCustomSubnet (31.94s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (68.83s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220718014726-4043 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220718014726-4043 --driver=docker : (30.164477489s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220718014726-4043 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220718014726-4043 --driver=docker : (31.151387205s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220718014726-4043
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220718014726-4043
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220718014726-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220718014726-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220718014726-4043: (2.725620573s)
helpers_test.go:175: Cleaning up "first-20220718014726-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220718014726-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220718014726-4043: (2.701788543s)
--- PASS: TestMinikubeProfile (68.83s)

TestMountStart/serial/StartWithMountFirst (7.89s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220718014835-4043 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220718014835-4043 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.891919411s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.89s)

TestMountStart/serial/VerifyMountFirst (0.44s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220718014835-4043 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

TestMountStart/serial/StartWithMountSecond (8.08s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220718014835-4043 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220718014835-4043 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.077154098s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.08s)

TestMountStart/serial/VerifyMountSecond (0.45s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220718014835-4043 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

TestMountStart/serial/DeleteFirst (2.29s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220718014835-4043 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220718014835-4043 --alsologtostderr -v=5: (2.290320282s)
--- PASS: TestMountStart/serial/DeleteFirst (2.29s)

TestMountStart/serial/VerifyMountPostDelete (0.44s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220718014835-4043 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

TestMountStart/serial/Stop (1.61s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220718014835-4043
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220718014835-4043: (1.608967765s)
--- PASS: TestMountStart/serial/Stop (1.61s)

TestMountStart/serial/RestartStopped (5.67s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220718014835-4043
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220718014835-4043: (4.665053588s)
--- PASS: TestMountStart/serial/RestartStopped (5.67s)

TestMountStart/serial/VerifyMountPostStop (0.44s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220718014835-4043 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

TestMultiNode/serial/FreshStart2Nodes (110.93s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220718014905-4043 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0718 01:50:13.631024    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:50:29.621762    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220718014905-4043 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m50.153856601s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.93s)

TestMultiNode/serial/DeployApp2Nodes (8.76s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.688049603s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- rollout status deployment/busybox: (5.615955173s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-kcz5g -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-zk52n -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-kcz5g -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-zk52n -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-kcz5g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-zk52n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.76s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-kcz5g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-kcz5g -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-zk52n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220718014905-4043 -- exec busybox-d46db594c-zk52n -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (35.23s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220718014905-4043 -v 3 --alsologtostderr
E0718 01:51:36.689566    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220718014905-4043 -v 3 --alsologtostderr: (34.024490254s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr: (1.202903166s)
--- PASS: TestMultiNode/serial/AddNode (35.23s)

TestMultiNode/serial/ProfileList (0.61s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (17.16s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --output json --alsologtostderr: (1.140908414s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp testdata/cp-test.txt multinode-20220718014905-4043:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1821558859/001/cp-test_multinode-20220718014905-4043.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043:/home/docker/cp-test.txt multinode-20220718014905-4043-m02:/home/docker/cp-test_multinode-20220718014905-4043_multinode-20220718014905-4043-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 "sudo cat /home/docker/cp-test_multinode-20220718014905-4043_multinode-20220718014905-4043-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043:/home/docker/cp-test.txt multinode-20220718014905-4043-m03:/home/docker/cp-test_multinode-20220718014905-4043_multinode-20220718014905-4043-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 "sudo cat /home/docker/cp-test_multinode-20220718014905-4043_multinode-20220718014905-4043-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp testdata/cp-test.txt multinode-20220718014905-4043-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1821558859/001/cp-test_multinode-20220718014905-4043-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043-m02:/home/docker/cp-test.txt multinode-20220718014905-4043:/home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 "sudo cat /home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043-m02:/home/docker/cp-test.txt multinode-20220718014905-4043-m03:/home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 "sudo cat /home/docker/cp-test_multinode-20220718014905-4043-m02_multinode-20220718014905-4043-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp testdata/cp-test.txt multinode-20220718014905-4043-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1821558859/001/cp-test_multinode-20220718014905-4043-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043-m03:/home/docker/cp-test.txt multinode-20220718014905-4043:/home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043 "sudo cat /home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 cp multinode-20220718014905-4043-m03:/home/docker/cp-test.txt multinode-20220718014905-4043-m02:/home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 ssh -n multinode-20220718014905-4043-m02 "sudo cat /home/docker/cp-test_multinode-20220718014905-4043-m03_multinode-20220718014905-4043-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (17.16s)

TestMultiNode/serial/StopNode (14.26s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 node stop m03: (12.539363746s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status: exit status 7 (857.162223ms)

-- stdout --
	multinode-20220718014905-4043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220718014905-4043-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220718014905-4043-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr: exit status 7 (859.123496ms)

-- stdout --
	multinode-20220718014905-4043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220718014905-4043-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220718014905-4043-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 01:52:12.146917   11477 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:52:12.147104   11477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:52:12.147109   11477 out.go:309] Setting ErrFile to fd 2...
	I0718 01:52:12.147113   11477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:52:12.147223   11477 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:52:12.147394   11477 out.go:303] Setting JSON to false
	I0718 01:52:12.147409   11477 mustload.go:65] Loading cluster: multinode-20220718014905-4043
	I0718 01:52:12.147705   11477 config.go:178] Loaded profile config "multinode-20220718014905-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0718 01:52:12.147732   11477 status.go:253] checking status of multinode-20220718014905-4043 ...
	I0718 01:52:12.148110   11477 cli_runner.go:164] Run: docker container inspect multinode-20220718014905-4043 --format={{.State.Status}}
	I0718 01:52:12.221001   11477 status.go:328] multinode-20220718014905-4043 host status = "Running" (err=<nil>)
	I0718 01:52:12.221034   11477 host.go:66] Checking if "multinode-20220718014905-4043" exists ...
	I0718 01:52:12.221302   11477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220718014905-4043
	I0718 01:52:12.296079   11477 host.go:66] Checking if "multinode-20220718014905-4043" exists ...
	I0718 01:52:12.297497   11477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 01:52:12.297551   11477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220718014905-4043
	I0718 01:52:12.370682   11477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53572 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/multinode-20220718014905-4043/id_rsa Username:docker}
	I0718 01:52:12.456949   11477 ssh_runner.go:195] Run: systemctl --version
	I0718 01:52:12.461354   11477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 01:52:12.470333   11477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220718014905-4043
	I0718 01:52:12.543414   11477 kubeconfig.go:92] found "multinode-20220718014905-4043" server: "https://127.0.0.1:53571"
	I0718 01:52:12.543440   11477 api_server.go:165] Checking apiserver status ...
	I0718 01:52:12.543483   11477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 01:52:12.553932   11477 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1500/cgroup
	W0718 01:52:12.562138   11477 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1500/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 01:52:12.562153   11477 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53571/healthz ...
	I0718 01:52:12.567658   11477 api_server.go:266] https://127.0.0.1:53571/healthz returned 200:
	ok
	I0718 01:52:12.567672   11477 status.go:419] multinode-20220718014905-4043 apiserver status = Running (err=<nil>)
	I0718 01:52:12.567684   11477 status.go:255] multinode-20220718014905-4043 status: &{Name:multinode-20220718014905-4043 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 01:52:12.567698   11477 status.go:253] checking status of multinode-20220718014905-4043-m02 ...
	I0718 01:52:12.567969   11477 cli_runner.go:164] Run: docker container inspect multinode-20220718014905-4043-m02 --format={{.State.Status}}
	I0718 01:52:12.639794   11477 status.go:328] multinode-20220718014905-4043-m02 host status = "Running" (err=<nil>)
	I0718 01:52:12.639816   11477 host.go:66] Checking if "multinode-20220718014905-4043-m02" exists ...
	I0718 01:52:12.640092   11477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220718014905-4043-m02
	I0718 01:52:12.712483   11477 host.go:66] Checking if "multinode-20220718014905-4043-m02" exists ...
	I0718 01:52:12.712749   11477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 01:52:12.712815   11477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220718014905-4043-m02
	I0718 01:52:12.785730   11477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53707 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/machines/multinode-20220718014905-4043-m02/id_rsa Username:docker}
	I0718 01:52:12.873576   11477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 01:52:12.882478   11477 status.go:255] multinode-20220718014905-4043-m02 status: &{Name:multinode-20220718014905-4043-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0718 01:52:12.882500   11477 status.go:253] checking status of multinode-20220718014905-4043-m03 ...
	I0718 01:52:12.882753   11477 cli_runner.go:164] Run: docker container inspect multinode-20220718014905-4043-m03 --format={{.State.Status}}
	I0718 01:52:12.954544   11477 status.go:328] multinode-20220718014905-4043-m03 host status = "Stopped" (err=<nil>)
	I0718 01:52:12.954565   11477 status.go:341] host is not running, skipping remaining checks
	I0718 01:52:12.954571   11477 status.go:255] multinode-20220718014905-4043-m03 status: &{Name:multinode-20220718014905-4043-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.26s)

TestMultiNode/serial/StartAfterStop (20.08s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 node start m03 --alsologtostderr: (18.829973936s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status: (1.133912766s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (20.08s)

TestMultiNode/serial/RestartKeepsNodes (113.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220718014905-4043
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220718014905-4043
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220718014905-4043: (37.098492779s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220718014905-4043 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220718014905-4043 --wait=true -v=8 --alsologtostderr: (1m16.337182735s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220718014905-4043
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.54s)

TestMultiNode/serial/DeleteNode (18.84s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 node delete m03: (16.393374853s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.523547148s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.84s)

TestMultiNode/serial/StopMultiNode (25.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 stop: (24.756143763s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status: exit status 7 (179.89946ms)

-- stdout --
	multinode-20220718014905-4043
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220718014905-4043-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr: exit status 7 (183.917664ms)

-- stdout --
	multinode-20220718014905-4043
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220718014905-4043-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 01:55:10.450238   12092 out.go:296] Setting OutFile to fd 1 ...
	I0718 01:55:10.450396   12092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:55:10.450406   12092 out.go:309] Setting ErrFile to fd 2...
	I0718 01:55:10.450410   12092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 01:55:10.450509   12092 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/bin
	I0718 01:55:10.450675   12092 out.go:303] Setting JSON to false
	I0718 01:55:10.450689   12092 mustload.go:65] Loading cluster: multinode-20220718014905-4043
	I0718 01:55:10.451020   12092 config.go:178] Loaded profile config "multinode-20220718014905-4043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0718 01:55:10.451032   12092 status.go:253] checking status of multinode-20220718014905-4043 ...
	I0718 01:55:10.451360   12092 cli_runner.go:164] Run: docker container inspect multinode-20220718014905-4043 --format={{.State.Status}}
	I0718 01:55:10.517689   12092 status.go:328] multinode-20220718014905-4043 host status = "Stopped" (err=<nil>)
	I0718 01:55:10.517715   12092 status.go:341] host is not running, skipping remaining checks
	I0718 01:55:10.517722   12092 status.go:255] multinode-20220718014905-4043 status: &{Name:multinode-20220718014905-4043 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 01:55:10.517749   12092 status.go:253] checking status of multinode-20220718014905-4043-m02 ...
	I0718 01:55:10.518030   12092 cli_runner.go:164] Run: docker container inspect multinode-20220718014905-4043-m02 --format={{.State.Status}}
	I0718 01:55:10.583695   12092 status.go:328] multinode-20220718014905-4043-m02 host status = "Stopped" (err=<nil>)
	I0718 01:55:10.583718   12092 status.go:341] host is not running, skipping remaining checks
	I0718 01:55:10.583739   12092 status.go:255] multinode-20220718014905-4043-m02 status: &{Name:multinode-20220718014905-4043-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.12s)

TestMultiNode/serial/RestartMultiNode (60.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220718014905-4043 --wait=true -v=8 --alsologtostderr --driver=docker 
E0718 01:55:13.671221    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/addons-20220718012728-4043/client.crt: no such file or directory
E0718 01:55:29.662674    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/functional-20220718013239-4043/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220718014905-4043 --wait=true -v=8 --alsologtostderr --driver=docker : (57.748224889s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220718014905-4043 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.497798616s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.15s)

TestMultiNode/serial/ValidateNameConflict (34.14s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220718014905-4043
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220718014905-4043-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220718014905-4043-m02 --driver=docker : exit status 14 (346.953802ms)

-- stdout --
	* [multinode-20220718014905-4043-m02] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220718014905-4043-m02' is duplicated with machine name 'multinode-20220718014905-4043-m02' in profile 'multinode-20220718014905-4043'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220718014905-4043-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220718014905-4043-m03 --driver=docker : (30.430989893s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220718014905-4043
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220718014905-4043: exit status 80 (531.083855ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220718014905-4043
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220718014905-4043-m03 already exists in multinode-20220718014905-4043-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220718014905-4043-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220718014905-4043-m03: (2.7804947s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.14s)

TestScheduledStopUnix (103.53s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220718020116-4043 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220718020116-4043 --memory=2048 --driver=docker : (29.125053119s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220718020116-4043 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220718020116-4043 -n scheduled-stop-20220718020116-4043
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220718020116-4043 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220718020116-4043 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220718020116-4043 -n scheduled-stop-20220718020116-4043
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220718020116-4043
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220718020116-4043 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220718020116-4043
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220718020116-4043: exit status 7 (118.530688ms)

-- stdout --
	scheduled-stop-20220718020116-4043
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220718020116-4043 -n scheduled-stop-20220718020116-4043
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220718020116-4043 -n scheduled-stop-20220718020116-4043: exit status 7 (113.29838ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220718020116-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220718020116-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220718020116-4043: (2.419026648s)
--- PASS: TestScheduledStopUnix (103.53s)

TestSkaffold (60.71s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1470318270 version
skaffold_test.go:63: skaffold version: v1.39.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220718020259-4043 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220718020259-4043 --memory=2600 --driver=docker : (28.682384372s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1470318270 run --minikube-profile skaffold-20220718020259-4043 --kube-context skaffold-20220718020259-4043 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1470318270 run --minikube-profile skaffold-20220718020259-4043 --kube-context skaffold-20220718020259-4043 --status-check=true --port-forward=false --interactive=false: (17.66336329s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-6769bb9fd4-pqblv" [eb1e4841-c4ce-47e7-9b8b-b4427d46efd8] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014633968s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5d8d6bd4f6-9gxfd" [c15bf693-7a38-44dc-82e0-b249e3f4fcb9] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005971042s
helpers_test.go:175: Cleaning up "skaffold-20220718020259-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220718020259-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220718020259-4043: (3.067723761s)
--- PASS: TestSkaffold (60.71s)

TestInsufficientStorage (13.05s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220718020400-4043 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220718020400-4043 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.711518316s)

-- stdout --
	{"specversion":"1.0","id":"4eed7f85-823c-481d-85e1-edf7c7625ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220718020400-4043] minikube v1.26.0 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5776e095-b587-47f3-a6aa-2fe7a196e137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14606"}}
	{"specversion":"1.0","id":"9bd16c8a-a3f0-4742-bd37-8e676d1ea905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig"}}
	{"specversion":"1.0","id":"158ae5c3-a44d-40f8-b8ba-d13be5324d7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"379caba9-873d-437f-b031-34502d4528df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7ee80cbc-8a01-445a-acb8-9fa52846d7e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube"}}
	{"specversion":"1.0","id":"bc089010-2c8a-481e-ac85-030da6d8fcc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c7b18308-a7ac-4621-95b0-525f753a503a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1a536f4c-bd78-48e0-83a3-7b19c77dc994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fc8ac1b-d447-4704-8572-acc4f6991ac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ad24fc37-34c8-402c-a935-de51a8a52fa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220718020400-4043 in cluster insufficient-storage-20220718020400-4043","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd3cdb3a-9617-4a09-95af-e0191204c8f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bdf0125-6caf-433e-aa01-cdb1f3ca02b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"523ad15b-acde-4b68-b1e4-1049b3b96df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220718020400-4043 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220718020400-4043 --output=json --layout=cluster: exit status 7 (424.754342ms)

-- stdout --
	{"Name":"insufficient-storage-20220718020400-4043","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220718020400-4043","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0718 02:04:10.781268   13640 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220718020400-4043" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220718020400-4043 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220718020400-4043 --output=json --layout=cluster: exit status 7 (422.195806ms)

-- stdout --
	{"Name":"insufficient-storage-20220718020400-4043","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220718020400-4043","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0718 02:04:11.204143   13650 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220718020400-4043" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	E0718 02:04:11.212340   13650 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/insufficient-storage-20220718020400-4043/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220718020400-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220718020400-4043
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220718020400-4043: (2.493276127s)
--- PASS: TestInsufficientStorage (13.05s)
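The two non-zero-exit checks above both parse the `--output=json --layout=cluster` payload. As a minimal illustrative sketch (not the actual `status_test.go` assertions), the 507/InsufficientStorage mapping the test relies on can be verified like this, using an abbreviated copy of the payload from the run above:

```python
import json

# Cluster-layout payload as emitted by
# `minikube status --output=json --layout=cluster` (abbreviated from the log).
payload = '''{"Name":"insufficient-storage-20220718020400-4043",
"StatusCode":507,"StatusName":"InsufficientStorage",
"StatusDetail":"/var is almost out of disk space",
"Nodes":[{"Name":"insufficient-storage-20220718020400-4043",
"StatusCode":507,"StatusName":"InsufficientStorage"}]}'''

status = json.loads(payload)

# 507 mirrors HTTP "Insufficient Storage"; the test expects it both on the
# cluster object and on every node entry.
assert status["StatusCode"] == 507
assert status["StatusName"] == "InsufficientStorage"
assert all(n["StatusCode"] == 507 for n in status["Nodes"])
```

The `exit status 7` in the log is expected here: `minikube status` exits non-zero whenever any component is not running, so the test treats the non-zero exit plus this JSON as a pass.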

TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220718020841-4043 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (368.324192ms)

-- stdout --
	* [NoKubernetes-20220718020841-4043] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14606
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)
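The `exit status 14` above is minikube's MK_USAGE exit code for an invalid flag combination. A hypothetical Python sketch of the mutual-exclusion rule this subtest exercises (function and parameter names are illustrative, not minikube's actual code):

```python
MK_USAGE = 14  # minikube's exit code for usage errors, per the log above


def validate_start_flags(no_kubernetes, kubernetes_version=None):
    """Return 0 if the flag combination is usable, MK_USAGE otherwise.

    Mirrors the log message: "Exiting due to MK_USAGE: cannot specify
    --kubernetes-version with --no-kubernetes".
    """
    if no_kubernetes and kubernetes_version:
        return MK_USAGE
    return 0


# The failing invocation from the log: --no-kubernetes --kubernetes-version=1.20
assert validate_start_flags(True, "1.20") == MK_USAGE
# Plain --no-kubernetes is accepted
assert validate_start_flags(True) == 0
```

The subtest passes because it expects exactly this usage error; a clean start here would be the failure.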

TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220718020841-4043 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220718020841-4043 "sudo systemctl is-active --quiet service kubelet": exit status 85 (114.409013ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220718020841-4043 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220718020841-4043 "sudo systemctl is-active --quiet service kubelet": exit status 85 (168.530479ms)

-- stdout --
	* Profile "NoKubernetes-20220718020841-4043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20220718020841-4043"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.83s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E0718 02:08:47.728112    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:47.890241    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:48.210385    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:48.852515    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
E0718 02:08:50.132973    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.83s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
E0718 02:08:57.819915    4043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/.minikube/profiles/skaffold-20220718020259-4043/client.crt: no such file or directory
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14606
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14606-2866-584c9efc3417eaa1e4c58e683eaf61fb634889e6/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3009296477/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3009296477/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3009296477/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3009296477/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (18/245)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.3/cached-images (0.00s)

TestDownloadOnly/v1.24.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.3/binaries (0.00s)

TestAddons/parallel/Registry (20.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 13.339999ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-qwwfr" [f5cc9df5-b7c0-48a8-b3db-42637f6f30d0] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008389476s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-q47jf" [13ac0af5-442c-4011-be2b-5661ae565610] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010115286s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220718012728-4043 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) Done: kubectl --context addons-20220718012728-4043 delete po -l run=registry-test --now: (3.04555688s)
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220718012728-4043 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220718012728-4043 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.83220679s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (20.91s)

TestAddons/parallel/Ingress (11.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220718012728-4043 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220718012728-4043 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220718012728-4043 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [f7010f03-70b2-4b22-9df0-23d1a268f999] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [f7010f03-70b2-4b22-9df0-23d1a268f999] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00708001s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220718012728-4043 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.85s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (13.28s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220718013239-4043 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220718013239-4043 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-5vpm9" [f93752ce-aac7-42be-969a-b627376fa041] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-5vpm9" [f93752ce-aac7-42be-969a-b627376fa041] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.01075408s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (13.28s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.7s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220718020413-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220718020413-4043
--- SKIP: TestNetworkPlugins/group/flannel (0.70s)

TestNetworkPlugins/group/custom-flannel (0.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220718020414-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220718020414-4043
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.56s)

TestStartStop/group/disable-driver-mounts (0.46s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220718021019-4043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220718021019-4043
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)
