Test Report: Docker_macOS 12739

24e369002aeb518840e093d9fb528e6077bdad6e:2021-11-17:21393

Tests failed (85/245)

Order  Failed test  Duration (s)
4 TestDownloadOnly/v1.14.0/preload-exists 0.18
39 TestCertOptions 1.59
40 TestCertExpiration 181.77
41 TestDockerFlags 1.56
42 TestForceSystemdFlag 1.51
43 TestForceSystemdEnv 2.17
64 TestFunctional/serial/CacheCmd/cache/add_remote 0.3
66 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.1
67 TestFunctional/serial/CacheCmd/cache/list 0.07
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.72
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
70 TestFunctional/serial/CacheCmd/cache/delete 0.2
73 TestFunctional/serial/ExtraConfig 31.17
74 TestFunctional/serial/ComponentHealth 13.26
206 TestRunningBinaryUpgrade 194.29
208 TestKubernetesUpgrade 110.04
222 TestStoppedBinaryUpgrade/Upgrade 232.1
223 TestStoppedBinaryUpgrade/MinikubeLogs 0.5
232 TestPause/serial/Start 0.66
233 TestPause/serial/SecondStartNoReconfiguration 0.67
234 TestPause/serial/Pause 0.51
235 TestPause/serial/VerifyStatus 0.25
236 TestPause/serial/Unpause 0.52
237 TestPause/serial/PauseAgain 0.52
239 TestPause/serial/VerifyDeletedResources 1.14
241 TestNoKubernetes/serial/Start 0.63
243 TestNoKubernetes/serial/ProfileList 0.51
244 TestNoKubernetes/serial/Stop 0.3
245 TestNoKubernetes/serial/StartNoArgs 0.69
249 TestNetworkPlugins/group/auto/Start 0.43
250 TestNetworkPlugins/group/false/Start 0.45
251 TestNetworkPlugins/group/cilium/Start 0.45
252 TestNetworkPlugins/group/calico/Start 0.43
253 TestNetworkPlugins/group/custom-weave/Start 0.43
254 TestNetworkPlugins/group/enable-default-cni/Start 0.41
255 TestNetworkPlugins/group/kindnet/Start 0.45
256 TestNetworkPlugins/group/bridge/Start 0.44
257 TestNetworkPlugins/group/kubenet/Start 0.45
259 TestStartStop/group/old-k8s-version/serial/FirstStart 0.66
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.36
262 TestStartStop/group/old-k8s-version/serial/Stop 0.31
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.42
264 TestStartStop/group/old-k8s-version/serial/SecondStart 0.64
265 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.21
266 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.25
267 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
268 TestStartStop/group/old-k8s-version/serial/Pause 0.51
270 TestStartStop/group/no-preload/serial/FirstStart 0.67
271 TestStartStop/group/no-preload/serial/DeployApp 0.53
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.35
273 TestStartStop/group/no-preload/serial/Stop 0.3
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.4
275 TestStartStop/group/no-preload/serial/SecondStart 0.64
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.21
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.25
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
279 TestStartStop/group/no-preload/serial/Pause 0.51
281 TestStartStop/group/embed-certs/serial/FirstStart 0.62
282 TestStartStop/group/embed-certs/serial/DeployApp 0.46
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.34
284 TestStartStop/group/embed-certs/serial/Stop 0.3
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
286 TestStartStop/group/embed-certs/serial/SecondStart 0.64
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.21
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.25
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
290 TestStartStop/group/embed-certs/serial/Pause 0.53
292 TestStartStop/group/default-k8s-different-port/serial/FirstStart 0.64
293 TestStartStop/group/default-k8s-different-port/serial/DeployApp 0.46
294 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.35
295 TestStartStop/group/default-k8s-different-port/serial/Stop 0.31
296 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.41
297 TestStartStop/group/default-k8s-different-port/serial/SecondStart 0.64
298 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 0.21
299 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 0.25
300 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.3
301 TestStartStop/group/default-k8s-different-port/serial/Pause 0.51
303 TestStartStop/group/newest-cni/serial/FirstStart 0.62
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.32
306 TestStartStop/group/newest-cni/serial/Stop 0.3
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
308 TestStartStop/group/newest-cni/serial/SecondStart 0.65
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
312 TestStartStop/group/newest-cni/serial/Pause 0.51
TestDownloadOnly/v1.14.0/preload-exists (0.18s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
aaa_download_only_test.go:105: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.14.0/preload-exists (0.18s)
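The check that failed above simply stats the preload tarball under the minikube cache directory. A minimal shell sketch of the same check (the `MINIKUBE_HOME` fallback is an assumption mirroring minikube's default; the Jenkins path in the log is build-specific and not reproduced here):

```shell
# Sketch of the preload-exists check: look for the cached preload tarball.
# The MINIKUBE_HOME fallback to $HOME is an assumption for illustration.
MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME}"
TARBALL="$MINIKUBE_HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4"
if [ -f "$TARBALL" ]; then
  STATUS=exists
else
  STATUS=missing
fi
echo "preload tarball: $STATUS"
```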

TestCertOptions (1.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20211117171058-31976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-20211117171058-31976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: exit status 69 (407.00252ms)

-- stdout --
	* [cert-options-20211117171058-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:52: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-20211117171058-31976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost" : exit status 69
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20211117171058-31976 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-20211117171058-31976 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 85 (94.144614ms)

-- stdout --
	* Profile "cert-options-20211117171058-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20211117171058-31976"

-- /stdout --
cert_options_test.go:63: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-20211117171058-31976 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 85
cert_options_test.go:70: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:70: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:70: apiserver cert does not include localhost in SAN.
cert_options_test.go:70: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:83: failed to inspect container for the port get port 8555 for "cert-options-20211117171058-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20211117171058-31976: exit status 1
stdout:

stderr:
Error response from daemon: Bad response from Docker engine
cert_options_test.go:86: expected to get a non-zero forwarded port but got 0
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20211117171058-31976 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-20211117171058-31976 -- "sudo cat /etc/kubernetes/admin.conf": exit status 85 (94.71636ms)

-- stdout --
	* Profile "cert-options-20211117171058-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20211117171058-31976"

-- /stdout --
cert_options_test.go:103: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-20211117171058-31976 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 85
cert_options_test.go:107: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* Profile "cert-options-20211117171058-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20211117171058-31976"

-- /stdout --
cert_options_test.go:110: *** TestCertOptions FAILED at 2021-11-17 17:10:59.016564 -0800 PST m=+3630.811482287
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20211117171058-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-options-20211117171058-31976: exit status 1 (114.176021ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20211117171058-31976 -n cert-options-20211117171058-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20211117171058-31976 -n cert-options-20211117171058-31976: exit status 85 (93.65986ms)

-- stdout --
	* Profile "cert-options-20211117171058-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-options-20211117171058-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "cert-options-20211117171058-31976" host is not running, skipping log retrieval (state="* Profile \"cert-options-20211117171058-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p cert-options-20211117171058-31976\"")
helpers_test.go:175: Cleaning up "cert-options-20211117171058-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20211117171058-31976
--- FAIL: TestCertOptions (1.59s)
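Every step of TestCertOptions failed for the same underlying reason: `docker version` returned exit status 1 ("Bad response from Docker engine"), so the docker driver was rejected before any cluster existed. A guarded sketch of the driver health probe visible in the log (the format string comes from the stderr above; the guards are assumptions added so the snippet degrades gracefully when no daemon is reachable):

```shell
# Sketch of the docker driver health probe seen in the log:
# "docker version --format {{.Server.Os}}-{{.Server.Version}}".
# The command -v guard is an assumption so this runs without a Docker CLI.
if ! command -v docker >/dev/null 2>&1; then
  RESULT=no-docker-cli
elif docker version --format '{{.Server.Os}}-{{.Server.Version}}' >/dev/null 2>&1; then
  RESULT=ok
else
  RESULT=daemon-error   # the state reported as PROVIDER_DOCKER_VERSION_EXIT_1
fi
echo "docker engine check: $RESULT"
```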

TestCertExpiration (181.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211117171050-31976 --memory=2048 --cert-expiration=3m --driver=docker 
* Downloading driver docker-machine-driver-hyperkit:
cert_options_test.go:124: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20211117171050-31976 --memory=2048 --cert-expiration=3m --driver=docker : exit status 69 (454.467578ms)

-- stdout --
	* [cert-expiration-20211117171050-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:126: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-20211117171050-31976 --memory=2048 --cert-expiration=3m --driver=docker " : exit status 69
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1498606143/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1498606143/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1498606143/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211117171050-31976 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:132: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20211117171050-31976 --memory=2048 --cert-expiration=8760h --driver=docker : exit status 69 (431.564505ms)

-- stdout --
	* [cert-expiration-20211117171050-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:134: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-20211117171050-31976 --memory=2048 --cert-expiration=8760h --driver=docker " : exit status 69
cert_options_test.go:137: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20211117171050-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
cert_options_test.go:139: *** TestCertExpiration FAILED at 2021-11-17 17:13:51.808904 -0800 PST m=+3803.599616588
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20211117171050-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-expiration-20211117171050-31976: exit status 1 (119.422176ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20211117171050-31976 -n cert-expiration-20211117171050-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20211117171050-31976 -n cert-expiration-20211117171050-31976: exit status 85 (95.477949ms)

-- stdout --
	* Profile "cert-expiration-20211117171050-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p cert-expiration-20211117171050-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "cert-expiration-20211117171050-31976" host is not running, skipping log retrieval (state="* Profile \"cert-expiration-20211117171050-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p cert-expiration-20211117171050-31976\"")
helpers_test.go:175: Cleaning up "cert-expiration-20211117171050-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20211117171050-31976
--- FAIL: TestCertExpiration (181.77s)

TestDockerFlags (1.56s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20211117171056-31976 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:46: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-20211117171056-31976 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 69 (433.006961ms)

-- stdout --
	* [docker-flags-20211117171056-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:10:56.782673   43987 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:56.782801   43987 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:56.782806   43987 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:56.782809   43987 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:56.782883   43987 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:56.783191   43987 out.go:304] Setting JSON to false
	I1117 17:10:56.808353   43987 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11431,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:10:56.808448   43987 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:10:56.835635   43987 out.go:176] * [docker-flags-20211117171056-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:10:56.835894   43987 notify.go:174] Checking for updates...
	I1117 17:10:56.884388   43987 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:10:56.910377   43987 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:10:56.936140   43987 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:10:56.962110   43987 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:10:56.962362   43987 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:10:57.052387   43987 docker.go:108] docker version returned error: exit status 1
	I1117 17:10:57.079303   43987 out.go:176] * Using the docker driver based on user configuration
	I1117 17:10:57.079361   43987 start.go:280] selected driver: docker
	I1117 17:10:57.079378   43987 start.go:775] validating driver "docker" against <nil>
	I1117 17:10:57.079404   43987 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:10:57.128044   43987 out.go:176] 
	W1117 17:10:57.128243   43987 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:10:57.128309   43987 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:10:57.154080   43987 out.go:176] 

** /stderr **
docker_test.go:48: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-20211117171056-31976 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 69
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 85 (95.205902ms)

-- stdout --
	* Profile "docker-flags-20211117171056-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p docker-flags-20211117171056-31976"

-- /stdout --
docker_test.go:53: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 85
docker_test.go:58: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* Profile \"docker-flags-20211117171056-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20211117171056-31976\"\n"*.
docker_test.go:58: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* Profile \"docker-flags-20211117171056-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20211117171056-31976\"\n"*.
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:62: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 85 (93.72835ms)

-- stdout --
	* Profile "docker-flags-20211117171056-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p docker-flags-20211117171056-31976"

-- /stdout --
docker_test.go:64: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 85
docker_test.go:68: expected "out/minikube-darwin-amd64 -p docker-flags-20211117171056-31976 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* Profile \"docker-flags-20211117171056-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20211117171056-31976\"\n"
panic.go:642: *** TestDockerFlags FAILED at 2021-11-17 17:10:57.366257 -0800 PST m=+3629.161215269
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20211117171056-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect docker-flags-20211117171056-31976: exit status 1 (115.539666ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20211117171056-31976 -n docker-flags-20211117171056-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20211117171056-31976 -n docker-flags-20211117171056-31976: exit status 85 (94.567169ms)

-- stdout --
	* Profile "docker-flags-20211117171056-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p docker-flags-20211117171056-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "docker-flags-20211117171056-31976" host is not running, skipping log retrieval (state="* Profile \"docker-flags-20211117171056-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p docker-flags-20211117171056-31976\"")
helpers_test.go:175: Cleaning up "docker-flags-20211117171056-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20211117171056-31976
--- FAIL: TestDockerFlags (1.56s)
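Every assertion in this test appears to fail downstream of one root symptom in the stderr blocks above: `Error response from daemon: Bad response from Docker engine`, i.e. the host Docker daemon was unhealthy before `minikube start` ever ran. A minimal preflight sketch (not part of the test suite; the probe command is the same `docker version --format` call minikube logs above):

```shell
# Probe the host Docker daemon the same way minikube's driver check does,
# and classify the result. Sets DOCKER_STATE to one of:
# missing, healthy, unhealthy.
if ! command -v docker >/dev/null 2>&1; then
  DOCKER_STATE=missing
elif docker version --format '{{.Server.Os}}-{{.Server.Version}}' >/dev/null 2>&1; then
  DOCKER_STATE=healthy
else
  DOCKER_STATE=unhealthy
fi
echo "docker daemon: $DOCKER_STATE"
```

On this Jenkins host the probe would presumably have reported `unhealthy`, which is why the dependent commands exit with status 69 or 85 instead of producing cluster logs.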

TestForceSystemdFlag (1.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20211117171036-31976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-20211117171036-31976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 69 (408.594741ms)

-- stdout --
	* [force-systemd-flag-20211117171036-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:10:36.603581   43838 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:36.603723   43838 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:36.603728   43838 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:36.603731   43838 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:36.603807   43838 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:36.604110   43838 out.go:304] Setting JSON to false
	I1117 17:10:36.629160   43838 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11411,"bootTime":1637186425,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:10:36.629247   43838 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:10:36.656664   43838 out.go:176] * [force-systemd-flag-20211117171036-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:10:36.656854   43838 notify.go:174] Checking for updates...
	I1117 17:10:36.682940   43838 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:10:36.709455   43838 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:10:36.735835   43838 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:10:36.762150   43838 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:10:36.762585   43838 config.go:176] Loaded profile config "running-upgrade-20211117170725-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 17:10:36.762616   43838 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:10:36.847683   43838 docker.go:108] docker version returned error: exit status 1
	I1117 17:10:36.874749   43838 out.go:176] * Using the docker driver based on user configuration
	I1117 17:10:36.874834   43838 start.go:280] selected driver: docker
	I1117 17:10:36.874850   43838 start.go:775] validating driver "docker" against <nil>
	I1117 17:10:36.874874   43838 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:10:36.923267   43838 out.go:176] 
	W1117 17:10:36.923457   43838 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:10:36.923541   43838 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:10:36.949439   43838 out.go:176] 

** /stderr **
docker_test.go:88: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-20211117171036-31976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 69
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20211117171036-31976 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-20211117171036-31976 ssh "docker info --format {{.CgroupDriver}}": exit status 85 (94.446069ms)

-- stdout --
	* Profile "force-systemd-flag-20211117171036-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-flag-20211117171036-31976"

-- /stdout --
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-20211117171036-31976 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 85
docker_test.go:101: *** TestForceSystemdFlag FAILED at 2021-11-17 17:10:37.066465 -0800 PST m=+3608.861917386
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20211117171036-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-flag-20211117171036-31976: exit status 1 (115.719145ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20211117171036-31976 -n force-systemd-flag-20211117171036-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20211117171036-31976 -n force-systemd-flag-20211117171036-31976: exit status 85 (94.122078ms)

-- stdout --
	* Profile "force-systemd-flag-20211117171036-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-flag-20211117171036-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "force-systemd-flag-20211117171036-31976" host is not running, skipping log retrieval (state="* Profile \"force-systemd-flag-20211117171036-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p force-systemd-flag-20211117171036-31976\"")
helpers_test.go:175: Cleaning up "force-systemd-flag-20211117171036-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20211117171036-31976
--- FAIL: TestForceSystemdFlag (1.51s)

TestForceSystemdEnv (2.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20211117171048-31976 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:151: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-20211117171048-31976 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 69 (738.149967ms)

-- stdout --
	* [force-systemd-env-20211117171048-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:10:48.788112   43939 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:48.788350   43939 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:48.788354   43939 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:48.788358   43939 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:48.788443   43939 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:48.788777   43939 out.go:304] Setting JSON to false
	I1117 17:10:48.858069   43939 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11423,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:10:48.858177   43939 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:10:48.898935   43939 out.go:176] * [force-systemd-env-20211117171048-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:10:48.899128   43939 notify.go:174] Checking for updates...
	I1117 17:10:48.945939   43939 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:10:49.013207   43939 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:10:49.073223   43939 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:10:49.148059   43939 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:10:49.188712   43939 out.go:176]   - MINIKUBE_FORCE_SYSTEMD=true
	I1117 17:10:49.189113   43939 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:10:49.304399   43939 docker.go:108] docker version returned error: exit status 1
	I1117 17:10:49.394221   43939 out.go:176] * Using the docker driver based on user configuration
	I1117 17:10:49.394268   43939 start.go:280] selected driver: docker
	I1117 17:10:49.394287   43939 start.go:775] validating driver "docker" against <nil>
	I1117 17:10:49.394327   43939 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:10:49.433197   43939 out.go:176] 
	W1117 17:10:49.433393   43939 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:10:49.433474   43939 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:10:49.460167   43939 out.go:176] 

** /stderr **
docker_test.go:153: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-20211117171048-31976 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 69
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20211117171048-31976 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-20211117171048-31976 ssh "docker info --format {{.CgroupDriver}}": exit status 85 (182.359099ms)

-- stdout --
	* Profile "force-systemd-env-20211117171048-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-env-20211117171048-31976"

-- /stdout --
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-20211117171048-31976 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 85
docker_test.go:162: *** TestForceSystemdEnv FAILED at 2021-11-17 17:10:49.665279 -0800 PST m=+3621.460425252
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20211117171048-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-env-20211117171048-31976: exit status 1 (170.829372ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20211117171048-31976 -n force-systemd-env-20211117171048-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20211117171048-31976 -n force-systemd-env-20211117171048-31976: exit status 85 (146.879084ms)

-- stdout --
	* Profile "force-systemd-env-20211117171048-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p force-systemd-env-20211117171048-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "force-systemd-env-20211117171048-31976" host is not running, skipping log retrieval (state="* Profile \"force-systemd-env-20211117171048-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p force-systemd-env-20211117171048-31976\"")
helpers_test.go:175: Cleaning up "force-systemd-env-20211117171048-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20211117171048-31976
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1498606143
* Using the hyperkit driver based on user configuration
--- FAIL: TestForceSystemdEnv (2.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:3.1
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:3.1: exit status 10 (102.039783ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.1": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.1
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_1ee7f0edc085faba6c5c2cd5567d37f230636116_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.1". args "out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:3.1" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:3.3
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:3.3: exit status 10 (100.698756ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.3": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.3
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_de8128d312e6d2ac89c1c5074cd22b7974c28c2b_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.3". args "out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:3.3" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:latest
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:latest: exit status 10 (101.505248ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_latest": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:latest
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_5aa7605f63066fc2b7f8379478b9def700202ac8_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:latest". args "out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add k8s.gcr.io/pause:latest" err exit status 10
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_remote (0.30s)
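The stderr blocks above also carry the hint that `"minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"`. A sketch of the equivalent modern invocations for the three images this test caches; the commands are printed rather than executed, since the profile from this run will not exist locally:

```shell
# Build (but do not run) the "image load" equivalents of the failing
# "cache add" calls. Profile name is taken from the log above.
PROFILE=functional-20211117161858-31976
for tag in 3.1 3.3 latest; do
  CMD="out/minikube-darwin-amd64 -p $PROFILE image load k8s.gcr.io/pause:$tag"
  echo "$CMD"
done
```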

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3: exit status 30 (101.476796ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.3: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_e17e40910561608ab15e9700ab84b4e1db856f38_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1041: failed to delete image k8s.gcr.io/pause:3.3 from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.10s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1047: (dbg) Run:  out/minikube-darwin-amd64 cache list
functional_test.go:1052: expected 'cache list' output to include 'k8s.gcr.io/pause' but got: ******
--- FAIL: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1061: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo crictl images
functional_test.go:1067: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	IMAGE                                     TAG                               IMAGE ID            SIZE
	gcr.io/k8s-minikube/storage-provisioner   v5                                6e38f40d628db       31.5MB
	k8s.gcr.io/coredns/coredns                v1.8.4                            8d147537fb7d1       47.6MB
	k8s.gcr.io/etcd                           3.5.0-0                           0048118155842       295MB
	k8s.gcr.io/kube-apiserver                 v1.22.3                           53224b502ea4d       128MB
	k8s.gcr.io/kube-controller-manager        v1.22.3                           05c905cef780c       122MB
	k8s.gcr.io/kube-proxy                     v1.22.3                           6120bd723dced       104MB
	k8s.gcr.io/kube-scheduler                 v1.22.3                           0aa9c7e31d307       52.7MB
	k8s.gcr.io/pause                          3.5                               ed210e3e4a5ba       683kB
	kubernetesui/dashboard                    v2.3.1                            e1482a24335a6       220MB
	kubernetesui/metrics-scraper              v1.0.7                            7801cfc6d5c07       34.4MB
	minikube-local-cache-test                 functional-20211117161858-31976   5005f91eebf03       30B

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.72s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 1 (627.349217ms)

-- stdout --
	Error: No such image: k8s.gcr.io/pause:latest

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1087: failed to manually delete image "out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 1
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (619.815773ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache reload
functional_test.go:1100: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1100: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (611.637806ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1102: expected "out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1109: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1: exit status 30 (98.743728ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.1: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_d1b33253e7334db9f364f7cea75d63fe683cad74_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:3.1 from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1": exit status 30
functional_test.go:1109: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest: exit status 30 (99.007379ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_latest: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_d17bcf228b7a032ee268baa189bce7c5c7938c34_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:latest from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete (0.20s)

+
TestFunctional/serial/ExtraConfig (31.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:698: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1117 16:21:36.918341   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
functional_test.go:698: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (26.823423081s)

-- stdout --
	* [functional-20211117161858-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117161858-31976 in cluster functional-20211117161858-31976
	* Pulling base image ...
	* Updating the running docker "functional-20211117161858-31976" container ...
	* Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	E1117 16:21:34.733709   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "etcd-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	E1117 16:21:34.738325   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	E1117 16:21:34.742733   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	E1117 16:21:35.029346   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-proxy-wbv29" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	E1117 16:21:35.432302   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20211117161858-31976": an error on the server ("") has prevented the request from succeeding (get nodes functional-20211117161858-31976)
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:700: failed to restart minikube. args "out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:702: restart took 26.823986352s for "functional-20211117161858-31976" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117161858-31976
helpers_test.go:235: (dbg) docker inspect functional-20211117161858-31976:

-- stdout --
	[
	    {
	        "Id": "d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3",
	        "Created": "2021-11-18T00:19:05.626133858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38790,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-11-18T00:19:17.027751041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/hosts",
	        "LogPath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3-json.log",
	        "Name": "/functional-20211117161858-31976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20211117161858-31976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20211117161858-31976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099-init/diff:/var/lib/docker/overlay2/a93dcfb6f3d6cc41c972ba77a74f26c33bb647aae37c056960e88eff1f45318e/diff:/var/lib/docker/overlay2/5bf663a55dc098d601b6dea4d4c10aaec9f068dcf0de0b940d77262bf5e9bdc6/diff:/var/lib/docker/overlay2/042de3e4be800f5293bfc3bc6fc92553d872b01461acd16fa5a146a312df0e28/diff:/var/lib/docker/overlay2/0790f68de366f4e5284d9606e1a26055a65a8ee9c04fd59b5bac02d4016cf450/diff:/var/lib/docker/overlay2/0b2d68653092e419e945cb562f07ca719191e8b17667a18a8f7b4c24ad10ab0e/diff:/var/lib/docker/overlay2/74497acbc2dda9790b519bf52aa865acb9f38e5cef76e7d8a3a4b529a3d9e702/diff:/var/lib/docker/overlay2/ff120bb48ef6e6e06a88f4c5187e25d554152cc97e0ea1fb3555f17a66908154/diff:/var/lib/docker/overlay2/f5db8db950342323f76b38613fda86996d2b4a2aa755297267caa5b7b8981da0/diff:/var/lib/docker/overlay2/6017be4153ffd7b1ab22d79efd97895ca9791c09d7e77b930827f1f338219cbb/diff:/var/lib/docker/overlay2/fd8bd2db0148ff3cbae056e3d40a5e785a6daa9839d28dce541d1db16c76a910/diff:/var/lib/docker/overlay2/6e6b657f7202e480d424be1b6934196a3f4b88d643e66c8f27823833e0833ba4/diff:/var/lib/docker/overlay2/4129ea43aaaf15e7de040c40f26e1a9a163317620d7ef92d98e4b2467d593034/diff:/var/lib/docker/overlay2/fd07546476691a27dba8ff73e418292264e996c20c06e955a30dbad83de1733c/diff:/var/lib/docker/overlay2/5ad0909670349956719e0f0ea9ddd5ee7e8959f505f470f10ac2520aa8014e97/diff:/var/lib/docker/overlay2/8825a434b266c3c834891f42fa35dc89e993f3c9e395f2f3c4d6e815f0e329af/diff:/var/lib/docker/overlay2/b4eeccd1b8c68a280e0e4d881a805d320f55b0c471529a3313b31b47252b0c47/diff:/var/lib/docker/overlay2/35fd48039713604d0debc8ac2009daf167d289893615387c0ad9287bffa10082/diff:/var/lib/docker/overlay2/494facb3e11d8950ad7593c6354187416d14009c69168858a6dffc25ebfbf84c/diff:/var/lib/docker/overlay2/4f1ce1df10039c93e604a552d53e3e6645d372f9cddab1b12c00f0067ad80ba2/diff:/var/lib/docker/overlay2/659379d7d9913fcb5492ef098d76112aded93cb7ce203354f9fbcee82d5b062c/diff:/var/lib/docker/overlay2/a1c5d5e92d294301fbac809907fba5a0acc107e187b93e52d5afb6bb0bc2eb9d/diff:/var/lib/docker/overlay2/5065eba0fcf1a8cf75076e4a123f1e9f038fbbf1fae3f82e3ea33d1523b60c91/diff:/var/lib/docker/overlay2/594d5b999ebec6822417fd4ba02da0a7cde6c024fdfb474db4ff3a0784d7f735/diff:/var/lib/docker/overlay2/067ddba03cf6c6688f887150cb3d7174e90d66e9a6f356e86cc3a906c4941894/diff:/var/lib/docker/overlay2/6cee93a03c4d65017c1ef9d392ae34d531e8f7abdd809dc26a0a48ede1ff8367/diff:/var/lib/docker/overlay2/d1e8cfbc84975893028d0b859ebb9ca07a8efc1c8ad9abc10fb1e9c7235f53d4/diff:/var/lib/docker/overlay2/4f2c513e3b5d4707a2aed9244d7ef9f6fc2631524cf8225ba0dfa2f8c3e3931c/diff:/var/lib/docker/overlay2/9be7da800f4028bec22556081948d1d22da9bb3be2d63dde017c61c44c0274ea/diff:/var/lib/docker/overlay2/58b13aae5c184fe2071d64f90c7955b5dbbe76d225c3bd9847f03d1ce1ec8664/diff:/var/lib/docker/overlay2/997e96d68467fd54763e75e2d501272ee5d0497b00ab2c5522522f8ba0754f07/diff:/var/lib/docker/overlay2/49be1e8263d191ec5fdfd8cd4138af81d23aaa12d6548a465b255afe5e8819c3/diff:/var/lib/docker/overlay2/8658b7127dfd599a3897343d297be67b4576b31720472927b3f5f1856059c56d/diff:/var/lib/docker/overlay2/26b53a2f30fe8fc01bfec363d6b00e2f5ab4f48325d1b5f62b5d8f2854dda781/diff:/var/lib/docker/overlay2/9abe9a8e38d4d40dcfa6152a2ae1bf2ed14dfa1579245d8534bea68a1124307b/diff:/var/lib/docker/overlay2/a1d4fbf621974c40a62164636c14f0dbe1ffa8a7bc7b13c1995ed93b1113dbe7/diff:/var/lib/docker/overlay2/aecec25c90cb357ced7bbaa69f3135cb6f7c8765605b7a360cd515642e00de14/diff:/var/lib/docker/overlay2/5c55f200070f6d0617d7f3031ce23a27df90a72720139ef51fac61fe40032625/diff:/var/lib/docker/overlay2/7a473817be0962d3e2ae1f57f32e95115af914c56a786f2d4d15a9dca232cefa/diff:/var/lib/docker/overlay2/3ca997de4525080aca8f86ad0f68f4f26acc4262a80846cfc96b3d4af8dd2526/diff:/var/lib/docker/overlay2/ad3ce384b651be2a1810da477a29e598be710b6e40f940a3bb3a4a9ed7ee048d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-20211117161858-31976",
	                "Source": "/var/lib/docker/volumes/functional-20211117161858-31976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20211117161858-31976",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20211117161858-31976",
	                "name.minikube.sigs.k8s.io": "functional-20211117161858-31976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "076ac0afe3795f81804d3da602b3a69ab38e221a0122b3f448d185a1ba341295",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52137"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52138"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52140"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52141"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/076ac0afe379",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20211117161858-31976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d46e9a51de65",
	                        "functional-20211117161858-31976"
	                    ],
	                    "NetworkID": "716130e30ae38c1d22de4c84857f1a2addaf8dd4e40f1651d386549d11497de6",
	                    "EndpointID": "dc6711626c432c360f8407c17df6ecba81d669b643cd6e499253a6205087f92f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117161858-31976 -n functional-20211117161858-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117161858-31976 -n functional-20211117161858-31976: exit status 2 (641.631979ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs -n 25: (2.917904273s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                          Args                                          |             Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p addons-20211117161126-31976                                                         | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:16:48 PST | Wed, 17 Nov 2021 16:17:06 PST |
	| addons  | enable dashboard -p                                                                    | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:06 PST | Wed, 17 Nov 2021 16:17:06 PST |
	|         | addons-20211117161126-31976                                                            |                                 |         |         |                               |                               |
	| addons  | disable dashboard -p                                                                   | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:06 PST | Wed, 17 Nov 2021 16:17:06 PST |
	|         | addons-20211117161126-31976                                                            |                                 |         |         |                               |                               |
	| delete  | -p addons-20211117161126-31976                                                         | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:06 PST | Wed, 17 Nov 2021 16:17:14 PST |
	| start   | -p nospam-20211117161714-31976 -n=1 --memory=2250 --wait=false                         | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:14 PST | Wed, 17 Nov 2021 16:18:24 PST |
	|         | --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 |                                 |         |         |                               |                               |
	|         | --driver=docker                                                                        |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:29 PST | Wed, 17 Nov 2021 16:18:29 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | pause                                                                                  |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:30 PST | Wed, 17 Nov 2021 16:18:30 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | pause                                                                                  |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:30 PST | Wed, 17 Nov 2021 16:18:31 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | pause                                                                                  |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:31 PST | Wed, 17 Nov 2021 16:18:32 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | unpause                                                                                |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:32 PST | Wed, 17 Nov 2021 16:18:32 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | unpause                                                                                |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:32 PST | Wed, 17 Nov 2021 16:18:33 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | unpause                                                                                |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:33 PST | Wed, 17 Nov 2021 16:18:50 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | stop                                                                                   |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:51 PST | Wed, 17 Nov 2021 16:18:51 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | stop                                                                                   |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:51 PST | Wed, 17 Nov 2021 16:18:51 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | stop                                                                                   |                                 |         |         |                               |                               |
	| delete  | -p nospam-20211117161714-31976                                                         | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:51 PST | Wed, 17 Nov 2021 16:18:58 PST |
	| start   | -p                                                                                     | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:58 PST | Wed, 17 Nov 2021 16:21:01 PST |
	|         | functional-20211117161858-31976                                                        |                                 |         |         |                               |                               |
	|         | --memory=4000                                                                          |                                 |         |         |                               |                               |
	|         | --apiserver-port=8441                                                                  |                                 |         |         |                               |                               |
	|         | --wait=all --driver=docker                                                             |                                 |         |         |                               |                               |
	| start   | -p                                                                                     | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:01 PST | Wed, 17 Nov 2021 16:21:09 PST |
	|         | functional-20211117161858-31976                                                        |                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                                 |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976 cache add                                              | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:11 PST | Wed, 17 Nov 2021 16:21:13 PST |
	|         | minikube-local-cache-test:functional-20211117161858-31976                              |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976 cache delete                                           | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:13 PST | Wed, 17 Nov 2021 16:21:13 PST |
	|         | minikube-local-cache-test:functional-20211117161858-31976                              |                                 |         |         |                               |                               |
	| cache   | list                                                                                   | minikube                        | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:13 PST | Wed, 17 Nov 2021 16:21:13 PST |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:13 PST | Wed, 17 Nov 2021 16:21:14 PST |
	|         | ssh sudo crictl images                                                                 |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:15 PST | Wed, 17 Nov 2021 16:21:15 PST |
	|         | cache reload                                                                           |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:17 PST | Wed, 17 Nov 2021 16:21:19 PST |
	|         | logs -n 25                                                                             |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:20 PST | Wed, 17 Nov 2021 16:21:21 PST |
	|         | kubectl -- --context                                                                   |                                 |         |         |                               |                               |
	|         | functional-20211117161858-31976                                                        |                                 |         |         |                               |                               |
	|         | get pods                                                                               |                                 |         |         |                               |                               |
	| kubectl | --profile=functional-20211117161858-31976                                              | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:21 PST | Wed, 17 Nov 2021 16:21:21 PST |
	|         | -- --context                                                                           |                                 |         |         |                               |                               |
	|         | functional-20211117161858-31976 get pods                                               |                                 |         |         |                               |                               |
	|---------|----------------------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 16:21:21
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 16:21:21.622885   33790 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:21:21.623009   33790 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:21:21.623011   33790 out.go:310] Setting ErrFile to fd 2...
	I1117 16:21:21.623013   33790 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:21:21.623084   33790 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 16:21:21.623330   33790 out.go:304] Setting JSON to false
	I1117 16:21:21.649303   33790 start.go:112] hostinfo: {"hostname":"37310.local","uptime":8456,"bootTime":1637186425,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:21:21.649404   33790 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:21:21.676374   33790 out.go:176] * [functional-20211117161858-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 16:21:21.676542   33790 notify.go:174] Checking for updates...
	I1117 16:21:21.702052   33790 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 16:21:21.728239   33790 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 16:21:21.753862   33790 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 16:21:21.779964   33790 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 16:21:21.780393   33790 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:21:21.780436   33790 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 16:21:21.881721   33790 docker.go:132] docker version: linux-20.10.6
	I1117 16:21:21.881850   33790 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:21:22.071842   33790 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-18 00:21:22.004732313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:21:22.120163   33790 out.go:176] * Using the docker driver based on existing profile
	I1117 16:21:22.120221   33790 start.go:280] selected driver: docker
	I1117 16:21:22.120231   33790 start.go:775] validating driver "docker" against &{Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:21:22.120357   33790 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 16:21:22.120726   33790 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:21:22.310085   33790 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-18 00:21:22.241494137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:21:22.312056   33790 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 16:21:22.312086   33790 cni.go:93] Creating CNI manager for ""
	I1117 16:21:22.312094   33790 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:21:22.312101   33790 start_flags.go:282] config:
	{Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:21:22.360734   33790 out.go:176] * Starting control plane node functional-20211117161858-31976 in cluster functional-20211117161858-31976
	I1117 16:21:22.360819   33790 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 16:21:22.386749   33790 out.go:176] * Pulling base image ...
	I1117 16:21:22.386821   33790 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 16:21:22.386901   33790 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 16:21:22.386931   33790 cache.go:57] Caching tarball of preloaded images
	I1117 16:21:22.386925   33790 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 16:21:22.387177   33790 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 16:21:22.387208   33790 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 16:21:22.387966   33790 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/config.json ...
	I1117 16:21:22.514468   33790 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 16:21:22.514485   33790 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 16:21:22.514497   33790 cache.go:206] Successfully downloaded all kic artifacts
	I1117 16:21:22.514541   33790 start.go:313] acquiring machines lock for functional-20211117161858-31976: {Name:mkf7e5ee0db2d67009702787d2639dd998f1b20a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:21:22.514618   33790 start.go:317] acquired machines lock for "functional-20211117161858-31976" in 62.221µs
	I1117 16:21:22.514641   33790 start.go:93] Skipping create...Using existing machine configuration
	I1117 16:21:22.514648   33790 fix.go:55] fixHost starting: 
	I1117 16:21:22.514907   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:22.635958   33790 fix.go:108] recreateIfNeeded on functional-20211117161858-31976: state=Running err=<nil>
	W1117 16:21:22.635981   33790 fix.go:134] unexpected machine state, will restart: <nil>
	I1117 16:21:22.662731   33790 out.go:176] * Updating the running docker "functional-20211117161858-31976" container ...
	I1117 16:21:22.662759   33790 machine.go:88] provisioning docker machine ...
	I1117 16:21:22.662778   33790 ubuntu.go:169] provisioning hostname "functional-20211117161858-31976"
	I1117 16:21:22.662854   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:22.782815   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:22.782997   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:22.783006   33790 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20211117161858-31976 && echo "functional-20211117161858-31976" | sudo tee /etc/hostname
	I1117 16:21:22.902454   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20211117161858-31976
	
	I1117 16:21:22.902543   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:23.024332   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:23.024486   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:23.024499   33790 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20211117161858-31976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20211117161858-31976/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20211117161858-31976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1117 16:21:23.134580   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1117 16:21:23.134623   33790 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube}
	I1117 16:21:23.134651   33790 ubuntu.go:177] setting up certificates
	I1117 16:21:23.134667   33790 provision.go:83] configureAuth start
	I1117 16:21:23.134760   33790 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211117161858-31976
	I1117 16:21:23.262120   33790 provision.go:138] copyHostCerts
	I1117 16:21:23.262204   33790 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem, removing ...
	I1117 16:21:23.262209   33790 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem
	I1117 16:21:23.262305   33790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem (1123 bytes)
	I1117 16:21:23.262507   33790 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem, removing ...
	I1117 16:21:23.262516   33790 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem
	I1117 16:21:23.262571   33790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem (1679 bytes)
	I1117 16:21:23.262708   33790 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem, removing ...
	I1117 16:21:23.262711   33790 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem
	I1117 16:21:23.262777   33790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem (1078 bytes)
	I1117 16:21:23.262901   33790 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem org=jenkins.functional-20211117161858-31976 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20211117161858-31976]
	I1117 16:21:23.387784   33790 provision.go:172] copyRemoteCerts
	I1117 16:21:23.387849   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1117 16:21:23.387901   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:23.508062   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:23.598162   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1117 16:21:23.614485   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1117 16:21:23.631096   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1117 16:21:23.647617   33790 provision.go:86] duration metric: configureAuth took 512.923631ms
	I1117 16:21:23.647626   33790 ubuntu.go:193] setting minikube options for container-runtime
	I1117 16:21:23.647791   33790 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:21:23.647858   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:23.767974   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:23.768127   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:23.768134   33790 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1117 16:21:23.881322   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1117 16:21:23.881336   33790 ubuntu.go:71] root file system type: overlay
	I1117 16:21:23.881520   33790 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1117 16:21:23.881615   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.000288   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:24.000448   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:24.000493   33790 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1117 16:21:24.119293   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1117 16:21:24.119389   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.237960   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:24.238130   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:24.238140   33790 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1117 16:21:24.352811   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: 
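	[editor's note] The `diff || { mv; daemon-reload; restart; }` command above is a compare-then-swap: the new unit is written to a `.new` path, and only when it differs from the installed file is it moved into place and the daemon restarted, so an unchanged unit never triggers a docker restart. A sketch of that idempotent-update pattern (hypothetical helper, assumed paths; the real sequence also runs systemctl):

```python
import os

def install_if_changed(path: str, new_content: str) -> bool:
    """Install new_content at path only when it differs from what is
    already there; return True when a daemon-reload/restart is needed."""
    try:
        with open(path) as f:
            if f.read() == new_content:
                return False  # unit unchanged; skip the restart
    except FileNotFoundError:
        pass  # first install, fall through and write it
    tmp = path + ".new"
    with open(tmp, "w") as f:
        f.write(new_content)
    os.replace(tmp, path)  # atomic swap, like the `sudo mv` above
    return True
```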
	I1117 16:21:24.352825   33790 machine.go:91] provisioned docker machine in 1.690013418s
	I1117 16:21:24.352833   33790 start.go:267] post-start starting for "functional-20211117161858-31976" (driver="docker")
	I1117 16:21:24.352836   33790 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1117 16:21:24.352919   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1117 16:21:24.352978   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.472328   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:24.555003   33790 ssh_runner.go:152] Run: cat /etc/os-release
	I1117 16:21:24.558639   33790 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1117 16:21:24.558652   33790 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1117 16:21:24.558660   33790 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1117 16:21:24.558665   33790 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1117 16:21:24.558672   33790 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/addons for local assets ...
	I1117 16:21:24.558765   33790 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files for local assets ...
	I1117 16:21:24.558951   33790 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem -> 319762.pem in /etc/ssl/certs
	I1117 16:21:24.559095   33790 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/test/nested/copy/31976/hosts -> hosts in /etc/test/nested/copy/31976
	I1117 16:21:24.559142   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/31976
	I1117 16:21:24.566252   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem --> /etc/ssl/certs/319762.pem (1708 bytes)
	I1117 16:21:24.582991   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/test/nested/copy/31976/hosts --> /etc/test/nested/copy/31976/hosts (40 bytes)
	I1117 16:21:24.599708   33790 start.go:270] post-start completed in 246.860649ms
	I1117 16:21:24.599784   33790 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:21:24.599840   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.718037   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:24.797768   33790 fix.go:57] fixHost completed within 2.283044439s
	I1117 16:21:24.797784   33790 start.go:80] releasing machines lock for "functional-20211117161858-31976", held for 2.283095634s
	I1117 16:21:24.797900   33790 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211117161858-31976
	I1117 16:21:24.917591   33790 ssh_runner.go:152] Run: systemctl --version
	I1117 16:21:24.917601   33790 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1117 16:21:24.917658   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.917672   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:25.046625   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:25.046759   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:25.593369   33790 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I1117 16:21:25.602997   33790 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I1117 16:21:25.612754   33790 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I1117 16:21:25.612817   33790 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1117 16:21:25.621861   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 16:21:25.634265   33790 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I1117 16:21:25.711219   33790 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I1117 16:21:25.789866   33790 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I1117 16:21:25.800286   33790 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1117 16:21:25.877362   33790 ssh_runner.go:152] Run: sudo systemctl start docker
	I1117 16:21:25.887091   33790 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I1117 16:21:25.925452   33790 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I1117 16:21:25.992258   33790 out.go:203] * Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
	I1117 16:21:25.992393   33790 cli_runner.go:115] Run: docker exec -t functional-20211117161858-31976 dig +short host.docker.internal
	I1117 16:21:26.182933   33790 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1117 16:21:26.183029   33790 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1117 16:21:26.187212   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:26.350225   33790 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1117 16:21:26.350377   33790 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 16:21:26.350545   33790 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:21:26.382564   33790 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.3
	k8s.gcr.io/kube-scheduler:v1.22.3
	k8s.gcr.io/kube-controller-manager:v1.22.3
	k8s.gcr.io/kube-proxy:v1.22.3
	minikube-local-cache-test:functional-20211117161858-31976
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1117 16:21:26.382573   33790 docker.go:489] Images already preloaded, skipping extraction
	I1117 16:21:26.382653   33790 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:21:26.412928   33790 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.3
	k8s.gcr.io/kube-scheduler:v1.22.3
	k8s.gcr.io/kube-controller-manager:v1.22.3
	k8s.gcr.io/kube-proxy:v1.22.3
	minikube-local-cache-test:functional-20211117161858-31976
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1117 16:21:26.412940   33790 cache_images.go:79] Images are preloaded, skipping loading
	I1117 16:21:26.413025   33790 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I1117 16:21:26.493364   33790 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1117 16:21:26.493384   33790 cni.go:93] Creating CNI manager for ""
	I1117 16:21:26.493390   33790 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:21:26.493393   33790 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1117 16:21:26.493408   33790 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20211117161858-31976 NodeName:functional-20211117161858-31976 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1117 16:21:26.493510   33790 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20211117161858-31976"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
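	[editor's note] The kubeadm config above is rendered from the options struct logged at kubeadm.go:153 (CIDRs, API server port, control-plane endpoint, extra args). A toy sketch of that rendering step (hypothetical template and field names, not minikube's actual Go template; shown only to illustrate how the struct maps onto the YAML):

```python
CLUSTER_TMPL = """apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {endpoint}:{port}
kubernetesVersion: {version}
networking:
  podSubnet: "{pod_subnet}"
  serviceSubnet: {svc_subnet}
"""

def render_cluster_config(opts: dict) -> str:
    # Fill the ClusterConfiguration template from the kubeadm options map
    return CLUSTER_TMPL.format(**opts)
```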
	I1117 16:21:26.493599   33790 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20211117161858-31976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1117 16:21:26.493666   33790 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
	I1117 16:21:26.501690   33790 binaries.go:44] Found k8s binaries, skipping transfer
	I1117 16:21:26.501744   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1117 16:21:26.508804   33790 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I1117 16:21:26.521182   33790 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1117 16:21:26.533276   33790 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1924 bytes)
	I1117 16:21:26.554949   33790 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1117 16:21:26.559258   33790 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976 for IP: 192.168.49.2
	I1117 16:21:26.559460   33790 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.key
	I1117 16:21:26.559521   33790 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.key
	I1117 16:21:26.559652   33790 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.key
	I1117 16:21:26.559720   33790 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/apiserver.key.dd3b5fb2
	I1117 16:21:26.559771   33790 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/proxy-client.key
	I1117 16:21:26.559974   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/31976.pem (1338 bytes)
	W1117 16:21:26.560019   33790 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/31976_empty.pem, impossibly tiny 0 bytes
	I1117 16:21:26.560033   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem (1679 bytes)
	I1117 16:21:26.560072   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem (1078 bytes)
	I1117 16:21:26.560110   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem (1123 bytes)
	I1117 16:21:26.560155   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem (1679 bytes)
	I1117 16:21:26.560217   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem (1708 bytes)
	I1117 16:21:26.561014   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1117 16:21:26.582083   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1117 16:21:26.599821   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1117 16:21:26.636540   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1117 16:21:26.653473   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1117 16:21:26.670129   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1117 16:21:26.686208   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1117 16:21:26.702853   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1117 16:21:26.719475   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1117 16:21:26.736305   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/31976.pem --> /usr/share/ca-certificates/31976.pem (1338 bytes)
	I1117 16:21:26.753116   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem --> /usr/share/ca-certificates/319762.pem (1708 bytes)
	I1117 16:21:26.770927   33790 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1117 16:21:26.783126   33790 ssh_runner.go:152] Run: openssl version
	I1117 16:21:26.788339   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31976.pem && ln -fs /usr/share/ca-certificates/31976.pem /etc/ssl/certs/31976.pem"
	I1117 16:21:26.795960   33790 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/31976.pem
	I1117 16:21:26.799692   33790 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 18 00:18 /usr/share/ca-certificates/31976.pem
	I1117 16:21:26.799736   33790 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31976.pem
	I1117 16:21:26.804943   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/31976.pem /etc/ssl/certs/51391683.0"
	I1117 16:21:26.812809   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/319762.pem && ln -fs /usr/share/ca-certificates/319762.pem /etc/ssl/certs/319762.pem"
	I1117 16:21:26.820966   33790 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/319762.pem
	I1117 16:21:26.825189   33790 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 18 00:18 /usr/share/ca-certificates/319762.pem
	I1117 16:21:26.825239   33790 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/319762.pem
	I1117 16:21:26.830525   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/319762.pem /etc/ssl/certs/3ec20f2e.0"
	I1117 16:21:26.838070   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1117 16:21:26.846059   33790 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:21:26.850266   33790 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 18 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:21:26.850311   33790 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:21:26.855877   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1117 16:21:26.863048   33790 kubeadm.go:390] StartCluster: {Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:21:26.863171   33790 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1117 16:21:26.891230   33790 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1117 16:21:26.899485   33790 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1117 16:21:26.899495   33790 kubeadm.go:600] restartCluster start
	I1117 16:21:26.899552   33790 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1117 16:21:26.906674   33790 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1117 16:21:26.906751   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:27.028837   33790 kubeconfig.go:92] found "functional-20211117161858-31976" server: "https://127.0.0.1:52141"
	I1117 16:21:27.032037   33790 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1117 16:21:27.040014   33790 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-11-18 00:19:49.283585838 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-11-18 00:21:26.562255672 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1117 16:21:27.040021   33790 kubeadm.go:1032] stopping kube-system containers ...
	I1117 16:21:27.040101   33790 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1117 16:21:27.071071   33790 docker.go:390] Stopping containers: [ea850ca3cc61 d0d3e0dca5e9 41b6ab69a384 0e35560df59b e4efee81ca46 26ff204bef53 7d514020b6b7 9b2f0c8b2294 57f34cc17d7a acade1ad68d7 5c6c4dc475e0 9bad689182de 72ff4c2179fb e30cdc66639d c595f5f22c80]
	I1117 16:21:27.071168   33790 ssh_runner.go:152] Run: docker stop ea850ca3cc61 d0d3e0dca5e9 41b6ab69a384 0e35560df59b e4efee81ca46 26ff204bef53 7d514020b6b7 9b2f0c8b2294 57f34cc17d7a acade1ad68d7 5c6c4dc475e0 9bad689182de 72ff4c2179fb e30cdc66639d c595f5f22c80
	I1117 16:21:32.228295   33790 ssh_runner.go:192] Completed: docker stop ea850ca3cc61 d0d3e0dca5e9 41b6ab69a384 0e35560df59b e4efee81ca46 26ff204bef53 7d514020b6b7 9b2f0c8b2294 57f34cc17d7a acade1ad68d7 5c6c4dc475e0 9bad689182de 72ff4c2179fb e30cdc66639d c595f5f22c80: (5.156941798s)
	I1117 16:21:32.228388   33790 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I1117 16:21:32.273365   33790 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1117 16:21:32.281095   33790 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Nov 18 00:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov 18 00:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Nov 18 00:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov 18 00:19 /etc/kubernetes/scheduler.conf
	
	I1117 16:21:32.281148   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1117 16:21:32.288879   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1117 16:21:32.296147   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1117 16:21:32.303562   33790 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1117 16:21:32.303622   33790 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1117 16:21:32.312025   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1117 16:21:32.319391   33790 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1117 16:21:32.319450   33790 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1117 16:21:32.326292   33790 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1117 16:21:32.333970   33790 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1117 16:21:32.333977   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:32.380483   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.540088   33790 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159560405s)
	I1117 16:21:33.540097   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.673891   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.726579   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.795988   33790 api_server.go:51] waiting for apiserver process to appear ...
	I1117 16:21:33.796057   33790 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 16:21:33.812813   33790 api_server.go:71] duration metric: took 16.830404ms to wait for apiserver process to appear ...
	I1117 16:21:33.812822   33790 api_server.go:87] waiting for apiserver healthz status ...
	I1117 16:21:33.812833   33790 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52141/healthz ...
	I1117 16:21:33.819102   33790 api_server.go:266] https://127.0.0.1:52141/healthz returned 200:
	ok
	I1117 16:21:33.826158   33790 api_server.go:140] control plane version: v1.22.3
	I1117 16:21:33.826169   33790 api_server.go:130] duration metric: took 13.343873ms to wait for apiserver health ...
	I1117 16:21:33.826174   33790 cni.go:93] Creating CNI manager for ""
	I1117 16:21:33.826178   33790 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:21:33.826185   33790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1117 16:21:33.836317   33790 system_pods.go:59] 7 kube-system pods found
	I1117 16:21:33.836331   33790 system_pods.go:61] "coredns-78fcd69978-dnq6x" [67a186a3-f954-4960-bb9e-57d18527dbc7] Running
	I1117 16:21:33.836337   33790 system_pods.go:61] "etcd-functional-20211117161858-31976" [0591e3ee-239f-402b-a882-d725460bb901] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1117 16:21:33.836340   33790 system_pods.go:61] "kube-apiserver-functional-20211117161858-31976" [e7056857-2ed9-4d76-8caa-79910d7b601e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1117 16:21:33.836348   33790 system_pods.go:61] "kube-controller-manager-functional-20211117161858-31976" [7fd136e2-3546-4ed5-a212-51641f6cb3d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1117 16:21:33.836351   33790 system_pods.go:61] "kube-proxy-wbv29" [453e3252-c5a8-48b4-893b-0496a8ed4dec] Running
	I1117 16:21:33.836353   33790 system_pods.go:61] "kube-scheduler-functional-20211117161858-31976" [709bdcfd-0135-490f-9d56-c5ad014aab58] Running
	I1117 16:21:33.836358   33790 system_pods.go:61] "storage-provisioner" [9b9ccef0-fccc-43f7-8dec-952d07564964] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1117 16:21:33.836361   33790 system_pods.go:74] duration metric: took 10.173348ms to wait for pod list to return data ...
	I1117 16:21:33.836365   33790 node_conditions.go:102] verifying NodePressure condition ...
	I1117 16:21:33.840538   33790 node_conditions.go:122] node storage ephemeral capacity is 123591232Ki
	I1117 16:21:33.840552   33790 node_conditions.go:123] node cpu capacity is 6
	I1117 16:21:33.840563   33790 node_conditions.go:105] duration metric: took 4.193776ms to run NodePressure ...
	I1117 16:21:33.840571   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:34.196195   33790 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1117 16:21:34.200872   33790 kubeadm.go:746] kubelet initialised
	I1117 16:21:34.200878   33790 kubeadm.go:747] duration metric: took 4.672789ms waiting for restarted kubelet to initialise ...
	I1117 16:21:34.200885   33790 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1117 16:21:34.206372   33790 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-dnq6x" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.216319   33790 pod_ready.go:92] pod "coredns-78fcd69978-dnq6x" in "kube-system" namespace has status "Ready":"True"
	I1117 16:21:34.216325   33790 pod_ready.go:81] duration metric: took 9.941793ms waiting for pod "coredns-78fcd69978-dnq6x" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.216336   33790 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.733695   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "etcd-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.733705   33790 pod_ready.go:81] duration metric: took 517.348753ms waiting for pod "etcd-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:34.733709   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "etcd-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.733723   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.738314   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.738320   33790 pod_ready.go:81] duration metric: took 4.593288ms waiting for pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:34.738325   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.738332   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.742721   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.742728   33790 pod_ready.go:81] duration metric: took 4.392273ms waiting for pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:34.742733   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.742742   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbv29" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:35.029336   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-proxy-wbv29" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.029342   33790 pod_ready.go:81] duration metric: took 286.584397ms waiting for pod "kube-proxy-wbv29" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:35.029346   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-proxy-wbv29" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.029360   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:35.432286   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.432297   33790 pod_ready.go:81] duration metric: took 402.92083ms waiting for pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:35.432302   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.432314   33790 pod_ready.go:38] duration metric: took 1.231387704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1117 16:21:35.432326   33790 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1117 16:21:35.448332   33790 ops.go:34] apiserver oom_adj: -16
	I1117 16:21:35.448338   33790 kubeadm.go:604] restartCluster took 8.54859145s
	I1117 16:21:35.448342   33790 kubeadm.go:392] StartCluster complete in 8.585052624s
	I1117 16:21:35.448355   33790 settings.go:142] acquiring lock: {Name:mk2452f58907cab9912f4bf05149d18acb236e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:21:35.448438   33790 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 16:21:35.448901   33790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig: {Name:mk7b88dea9e1cf642f59443febe00ec01446b401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:21:35.454603   33790 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20211117161858-31976" rescaled to 1
	I1117 16:21:35.454631   33790 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 16:21:35.454642   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1117 16:21:35.454677   33790 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1117 16:21:35.482280   33790 out.go:176] * Verifying Kubernetes components...
	I1117 16:21:35.454823   33790 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:21:35.482339   33790 addons.go:65] Setting default-storageclass=true in profile "functional-20211117161858-31976"
	I1117 16:21:35.482342   33790 addons.go:65] Setting storage-provisioner=true in profile "functional-20211117161858-31976"
	I1117 16:21:35.482355   33790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20211117161858-31976"
	I1117 16:21:35.482360   33790 addons.go:153] Setting addon storage-provisioner=true in "functional-20211117161858-31976"
	I1117 16:21:35.482365   33790 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	W1117 16:21:35.482365   33790 addons.go:165] addon storage-provisioner should already be in state true
	I1117 16:21:35.482397   33790 host.go:66] Checking if "functional-20211117161858-31976" exists ...
	I1117 16:21:35.482835   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:35.503648   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:35.517807   33790 start.go:719] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1117 16:21:35.517901   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:35.651758   33790 addons.go:153] Setting addon default-storageclass=true in "functional-20211117161858-31976"
	W1117 16:21:35.651773   33790 addons.go:165] addon default-storageclass should already be in state true
	I1117 16:21:35.651786   33790 host.go:66] Checking if "functional-20211117161858-31976" exists ...
	I1117 16:21:35.652195   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:35.688652   33790 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 16:21:35.688763   33790 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 16:21:35.688779   33790 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1117 16:21:35.688858   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:35.694743   33790 node_ready.go:35] waiting up to 6m0s for node "functional-20211117161858-31976" to be "Ready" ...
	I1117 16:21:35.793431   33790 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1117 16:21:35.793440   33790 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1117 16:21:35.793530   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:35.827874   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:35.921703   33790 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 16:21:35.934689   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:36.037457   33790 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1117 16:21:36.223180   33790 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1117 16:21:36.223193   33790 addons.go:417] enableAddons completed in 768.519889ms
	I1117 16:21:37.704232   33790 node_ready.go:58] node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:48.248672   33790 node_ready.go:53] error getting node "functional-20211117161858-31976": an error on the server ("") has prevented the request from succeeding (get nodes functional-20211117161858-31976)
	I1117 16:21:48.248680   33790 node_ready.go:38] duration metric: took 12.55354997s waiting for node "functional-20211117161858-31976" to be "Ready" ...
	I1117 16:21:48.274481   33790 out.go:176] 
	W1117 16:21:48.274560   33790 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20211117161858-31976": an error on the server ("") has prevented the request from succeeding (get nodes functional-20211117161858-31976)
	W1117 16:21:48.274567   33790 out.go:241] * 
	W1117 16:21:48.275136   33790 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2021-11-18 00:19:19 UTC, end at Thu 2021-11-18 00:21:50 UTC. --
	Nov 18 00:19:46 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:19:46.215832592Z" level=info msg="Daemon has completed initialization"
	Nov 18 00:19:46 functional-20211117161858-31976 systemd[1]: Started Docker Application Container Engine.
	Nov 18 00:19:46 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:19:46.238854763Z" level=info msg="API listen on [::]:2376"
	Nov 18 00:19:46 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:19:46.241903110Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 18 00:20:31 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:20:31.951567126Z" level=info msg="ignoring event" container=297f7a37e8bd7e92a66de64cccc099099267fca4f04ebdb80560c3fcd6134577 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:20:31 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:20:31.995327419Z" level=info msg="ignoring event" container=1a443d1f744d39173e81d045ed1ce442d19760861b9b4307dc4c605d2085949d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:20:52 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:20:52.253052227Z" level=info msg="ignoring event" container=d0d3e0dca5e9300eb2fb46ca47b9bcb6ec63a0aad06b59cb5d507a3a4e10680c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.295680499Z" level=info msg="ignoring event" container=41b6ab69a384cb2e2ddb1f58cec2f1e85d3f1eb993943f1b6072c82a98fd06b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.304131363Z" level=info msg="ignoring event" container=e4efee81ca4695061c2b6dbf156d12f268077143c8eb277bd7100d8ef2b2c746 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.317333478Z" level=info msg="ignoring event" container=acade1ad68d757b68fa1e074e64d4bcad38eb847718506af25d18d956f2802ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.317377663Z" level=info msg="ignoring event" container=7d514020b6b739c237d0dfbeed599f8d20fe29b82c0c692c1fd395c603db5d4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.318184856Z" level=info msg="ignoring event" container=9bad689182deb3a3a60cf2e56ad1100208a956315916a74e9011474dce7d993e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.328384627Z" level=info msg="ignoring event" container=72ff4c2179fba02d43ccc433933719e2c2c383ce806bfdeb133a5c8c986a42f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.400303630Z" level=info msg="ignoring event" container=ea850ca3cc61f91a1d9499307e0c2772fec37eaae99f198afab1abd75c47845b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.401539547Z" level=info msg="ignoring event" container=c595f5f22c8069dac10251914290d0bd3bffa5e83e842b2b2e5d28f8490bb26f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.404561545Z" level=info msg="ignoring event" container=26ff204bef53fc0dd18974e371dadc54ca7dcdf9d710d3665b61c303bddcee0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.421254142Z" level=info msg="ignoring event" container=e30cdc66639da264af9585cfacae2cd0521fceedefac10b6fcfb5737ba3a87c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.430842555Z" level=info msg="ignoring event" container=57f34cc17d7a6aae9f8632a13b459232478f514fe0f9238e9f12307474be2aeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:28 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:28.107342437Z" level=info msg="ignoring event" container=5c6c4dc475e0660b8b2222acba60032c7e49bb624a8c38a53e34b1ebba19402e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:28 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:28.118865919Z" level=info msg="ignoring event" container=9b2f0c8b2294b600fc96d434db4d07350341a1416663656abaf0eabe13f7007c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:32 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:32.227573401Z" level=info msg="ignoring event" container=0e35560df59bfc3e72f16462d40b50c0480ce4e80d80028a8c867f8f9cf914ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:36 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:36.535503755Z" level=info msg="ignoring event" container=2ff1bb3b6c9371822d485e55a78074e668fc4461c35828f4d9ea383e3af000a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:37 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:37.813734943Z" level=info msg="ignoring event" container=e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:37 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:37.885685250Z" level=info msg="ignoring event" container=2e79e79749173295007690797003e6e7aadbc13edb87be79e18ab291e2222c62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:37 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:37.910686858Z" level=info msg="ignoring event" container=622e4ca9307f2db3d4eff6dbe6895f584fdeac51698c6b276894bf22bcc4af05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e180853e41903       53224b502ea4d       13 seconds ago       Exited              kube-apiserver            1                   f8f1e147723bf
	fbf9821ffb084       8d147537fb7d1       13 seconds ago       Running             coredns                   1                   47dbcdbbcbb52
	422236a5e675b       6e38f40d628db       14 seconds ago       Running             storage-provisioner       2                   250d98e52e33e
	250907264d348       0aa9c7e31d307       22 seconds ago       Running             kube-scheduler            1                   6e098b42d6ec1
	e34b718143e14       05c905cef780c       22 seconds ago       Running             kube-controller-manager   1                   8d617fb091a41
	4a9c75d87f40b       0048118155842       22 seconds ago       Running             etcd                      1                   dbf91b2d10759
	dec4850aecf1d       6120bd723dced       22 seconds ago       Running             kube-proxy                1                   1357ef914aa3f
	ea850ca3cc61f       6e38f40d628db       58 seconds ago       Exited              storage-provisioner       1                   41b6ab69a384c
	0e35560df59bf       8d147537fb7d1       About a minute ago   Exited              coredns                   0                   7d514020b6b73
	e4efee81ca469       6120bd723dced       About a minute ago   Exited              kube-proxy                0                   26ff204bef53f
	9b2f0c8b2294b       0aa9c7e31d307       About a minute ago   Exited              kube-scheduler            0                   9bad689182deb
	57f34cc17d7a6       05c905cef780c       About a minute ago   Exited              kube-controller-manager   0                   72ff4c2179fba
	acade1ad68d75       0048118155842       About a minute ago   Exited              etcd                      0                   c595f5f22c806
	
	* 
	* ==> coredns [0e35560df59b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [fbf9821ffb08] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	W1118 00:21:37.886846       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1118 00:21:37.886923       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1118 00:21:37.886940       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	E1118 00:21:38.740403       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:39.028112       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:39.237619       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:40.592244       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:40.783828       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:41.320049       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:45.185752       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:45.421249       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:46.568514       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.035670] bpfilter: read fail 0
	[  +0.029225] bpfilter: read fail 0
	[  +0.036872] bpfilter: read fail 0
	[  +0.029931] bpfilter: read fail 0
	[  +0.028934] bpfilter: read fail 0
	[  +0.025120] bpfilter: read fail 0
	[  +0.028036] bpfilter: read fail 0
	[  +0.030521] bpfilter: read fail 0
	[  +0.025914] bpfilter: read fail 0
	[  +0.037066] bpfilter: read fail 0
	[  +0.044538] bpfilter: read fail 0
	[  +0.023912] bpfilter: read fail 0
	[  +0.034062] bpfilter: read fail 0
	[  +0.033394] bpfilter: read fail 0
	[  +0.034824] bpfilter: write fail -32
	[  +0.028619] bpfilter: write fail -32
	[  +0.026012] bpfilter: read fail 0
	[  +0.034679] bpfilter: write fail -32
	[  +0.033542] bpfilter: read fail 0
	[  +0.030549] bpfilter: write fail -32
	[  +0.030916] bpfilter: read fail 0
	[  +0.024932] bpfilter: read fail 0
	[  +0.030508] bpfilter: read fail 0
	[  +0.042210] bpfilter: read fail 0
	[  +0.025433] bpfilter: read fail 0
	
	* 
	* ==> etcd [4a9c75d87f40] <==
	* {"level":"info","ts":"2021-11-18T00:21:28.819Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-11-18T00:21:28.819Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-11-18T00:21:28.821Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.213Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211117161858-31976 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-11-18T00:21:29.213Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:21:29.214Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:21:29.215Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-11-18T00:21:29.215Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-11-18T00:21:29.215Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-11-18T00:21:29.226Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [acade1ad68d7] <==
	* {"level":"info","ts":"2021-11-18T00:20:01.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-11-18T00:20:01.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-11-18T00:20:01.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211117161858-31976 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:20:01.468Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-11-18T00:20:01.468Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-11-18T00:20:01.468Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-11-18T00:20:01.469Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-11-18T00:20:01.475Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:20:01.475Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:20:01.475Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:21:27.125Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-11-18T00:21:27.125Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20211117161858-31976","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/11/18 00:21:27 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/11/18 00:21:27 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-11-18T00:21:27.135Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2021-11-18T00:21:27.137Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:27.138Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:27.138Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20211117161858-31976","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  00:21:50 up 11 min,  0 users,  load average: 2.60, 2.24, 1.34
	Linux functional-20211117161858-31976 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e180853e4190] <==
	* I1118 00:21:37.794054       1 server.go:553] external host was not specified, using 192.168.49.2
	I1118 00:21:37.794682       1 server.go:161] Version: v1.22.3
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	
	* 
	* ==> kube-controller-manager [57f34cc17d7a] <==
	* I1118 00:20:19.861174       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I1118 00:20:19.864808       1 shared_informer.go:247] Caches are synced for node 
	I1118 00:20:19.864840       1 range_allocator.go:172] Starting range CIDR allocator
	I1118 00:20:19.864843       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I1118 00:20:19.864848       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I1118 00:20:19.868661       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wbv29"
	I1118 00:20:19.873974       1 range_allocator.go:373] Set node functional-20211117161858-31976 PodCIDR to [10.244.0.0/24]
	I1118 00:20:19.885673       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-dnq6x"
	I1118 00:20:19.936687       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1118 00:20:19.938663       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-hk5kk"
	I1118 00:20:20.000246       1 shared_informer.go:247] Caches are synced for disruption 
	I1118 00:20:20.038779       1 disruption.go:371] Sending events to api server.
	I1118 00:20:20.051192       1 shared_informer.go:247] Caches are synced for expand 
	I1118 00:20:20.051310       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1118 00:20:20.051344       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1118 00:20:20.056678       1 shared_informer.go:247] Caches are synced for attach detach 
	I1118 00:20:20.071851       1 shared_informer.go:247] Caches are synced for resource quota 
	I1118 00:20:20.071858       1 shared_informer.go:247] Caches are synced for ephemeral 
	I1118 00:20:20.076620       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I1118 00:20:20.078475       1 shared_informer.go:247] Caches are synced for resource quota 
	I1118 00:20:20.080869       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-hk5kk"
	I1118 00:20:20.102042       1 shared_informer.go:247] Caches are synced for stateful set 
	I1118 00:20:20.488636       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1118 00:20:20.493195       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1118 00:20:20.493249       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [e34b718143e1] <==
	* I1118 00:21:34.143549       1 shared_informer.go:240] Waiting for caches to sync for TTL after finished
	I1118 00:21:34.200748       1 controllermanager.go:577] Started "daemonset"
	I1118 00:21:34.200959       1 daemon_controller.go:284] Starting daemon sets controller
	I1118 00:21:34.200969       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I1118 00:21:34.203220       1 controllermanager.go:577] Started "tokencleaner"
	I1118 00:21:34.203256       1 tokencleaner.go:118] Starting token cleaner controller
	I1118 00:21:34.203265       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I1118 00:21:34.203322       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I1118 00:21:34.205951       1 controllermanager.go:577] Started "attachdetach"
	I1118 00:21:34.206303       1 attach_detach_controller.go:328] Starting attach detach controller
	I1118 00:21:34.206337       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I1118 00:21:34.215108       1 controllermanager.go:577] Started "horizontalpodautoscaling"
	I1118 00:21:34.215316       1 horizontal.go:169] Starting HPA controller
	I1118 00:21:34.215328       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I1118 00:21:34.217872       1 controllermanager.go:577] Started "ttl"
	I1118 00:21:34.217989       1 ttl_controller.go:121] Starting TTL controller
	I1118 00:21:34.218018       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I1118 00:21:34.220481       1 controllermanager.go:577] Started "bootstrapsigner"
	I1118 00:21:34.220737       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I1118 00:21:34.223299       1 node_ipam_controller.go:91] Sending events to api server.
	W1118 00:21:44.203700       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1118 00:21:44.704666       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1118 00:21:45.705860       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1118 00:21:47.706509       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	E1118 00:21:47.706662       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.49.2:8441/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	
	* 
	* ==> kube-proxy [dec4850aecf1] <==
	* E1118 00:21:28.731178       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976": dial tcp 192.168.49.2:8441: connect: connection refused
	I1118 00:21:32.008634       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1118 00:21:32.008670       1 server_others.go:140] Detected node IP 192.168.49.2
	W1118 00:21:32.008696       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1118 00:21:34.319819       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1118 00:21:34.319893       1 server_others.go:212] Using iptables Proxier.
	I1118 00:21:34.319919       1 server_others.go:219] creating dualStackProxier for iptables.
	W1118 00:21:34.319928       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1118 00:21:34.321812       1 server.go:649] Version: v1.22.3
	I1118 00:21:34.323441       1 config.go:315] Starting service config controller
	I1118 00:21:34.323468       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1118 00:21:34.323734       1 config.go:224] Starting endpoint slice config controller
	I1118 00:21:34.323740       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1118 00:21:34.424028       1 shared_informer.go:247] Caches are synced for service config 
	I1118 00:21:34.424062       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1118 00:21:41.600538       1 trace.go:205] Trace[1902479152]: "iptables restore" (18-Nov-2021 00:21:39.544) (total time: 2056ms):
	Trace[1902479152]: [2.056179659s] [2.056179659s] END
	I1118 00:21:50.818216       1 trace.go:205] Trace[1871746744]: "iptables restore" (18-Nov-2021 00:21:48.331) (total time: 2486ms):
	Trace[1871746744]: [2.486481381s] [2.486481381s] END
	
	* 
	* ==> kube-proxy [e4efee81ca46] <==
	* I1118 00:20:21.651696       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1118 00:20:21.651759       1 server_others.go:140] Detected node IP 192.168.49.2
	W1118 00:20:21.651773       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1118 00:20:23.854981       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1118 00:20:23.855018       1 server_others.go:212] Using iptables Proxier.
	I1118 00:20:23.855027       1 server_others.go:219] creating dualStackProxier for iptables.
	W1118 00:20:23.855038       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1118 00:20:23.855321       1 server.go:649] Version: v1.22.3
	I1118 00:20:23.855763       1 config.go:224] Starting endpoint slice config controller
	I1118 00:20:23.855791       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1118 00:20:23.855808       1 config.go:315] Starting service config controller
	I1118 00:20:23.855814       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1118 00:20:23.956738       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1118 00:20:23.956854       1 shared_informer.go:247] Caches are synced for service config 
	I1118 00:20:47.183178       1 trace.go:205] Trace[886166706]: "iptables restore" (18-Nov-2021 00:20:45.117) (total time: 2065ms):
	Trace[886166706]: [2.065366873s] [2.065366873s] END
	I1118 00:21:09.711497       1 trace.go:205] Trace[1669630311]: "iptables restore" (18-Nov-2021 00:21:07.502) (total time: 2208ms):
	Trace[1669630311]: [2.208983227s] [2.208983227s] END
	
	* 
	* ==> kube-scheduler [250907264d34] <==
	* I1118 00:21:29.418722       1 serving.go:347] Generated self-signed cert in-memory
	W1118 00:21:31.997832       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1118 00:21:31.997895       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1118 00:21:31.997924       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1118 00:21:31.997933       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1118 00:21:32.011623       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1118 00:21:32.011790       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1118 00:21:32.012127       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1118 00:21:32.011814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1118 00:21:32.021075       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021368       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021634       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021713       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021793       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021855       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021896       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I1118 00:21:32.112609       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [9b2f0c8b2294] <==
	* E1118 00:20:03.778476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:03.778649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1118 00:20:03.778756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1118 00:20:03.778356       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1118 00:20:03.778477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:03.778664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1118 00:20:03.778867       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1118 00:20:03.779132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1118 00:20:03.779200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:03.779292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1118 00:20:03.779420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1118 00:20:03.779535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:04.601872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1118 00:20:04.679677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1118 00:20:04.718435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1118 00:20:04.775960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1118 00:20:04.790187       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1118 00:20:04.862294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:04.865410       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:04.897363       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1118 00:20:04.930491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1118 00:20:06.875424       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I1118 00:21:27.208721       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1118 00:21:27.209099       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I1118 00:21:27.209128       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-11-18 00:19:19 UTC, end at Thu 2021-11-18 00:21:51 UTC. --
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:44.325785    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:44.326015    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:44.326044    6078 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:44.429262    6078 status_manager.go:601] "Failed to get status for pod" podUID=0ad7422ab14ae2d4b971f1822a1ff8ef pod="kube-system/kube-scheduler-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:44.462255    6078 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:44.862893    6078 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:44.967156    6078 status_manager.go:601] "Failed to get status for pod" podUID=0ad7422ab14ae2d4b971f1822a1ff8ef pod="kube-system/kube-scheduler-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:44 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:44.967348    6078 status_manager.go:601] "Failed to get status for pod" podUID=06052898487a9eef2760f89d323d2979 pod="kube-system/etcd-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.358730    6078 scope.go:110] "RemoveContainer" containerID="e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:45.359249    6078 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20211117161858-31976_kube-system(3c40bf658f457f3d925e48d646a29704)\"" pod="kube-system/kube-apiserver-functional-20211117161858-31976" podUID=3c40bf658f457f3d925e48d646a29704
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:45.663342    6078 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.819972    6078 status_manager.go:601] "Failed to get status for pod" podUID=9b9ccef0-fccc-43f7-8dec-952d07564964 pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.820291    6078 status_manager.go:601] "Failed to get status for pod" podUID=453e3252-c5a8-48b4-893b-0496a8ed4dec pod="kube-system/kube-proxy-wbv29" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wbv29\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.820757    6078 status_manager.go:601] "Failed to get status for pod" podUID=07d38b3c32289fcb168a5eedbb42a060 pod="kube-system/kube-controller-manager-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.821054    6078 status_manager.go:601] "Failed to get status for pod" podUID=67a186a3-f954-4960-bb9e-57d18527dbc7 pod="kube-system/coredns-78fcd69978-dnq6x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-dnq6x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.821326    6078 status_manager.go:601] "Failed to get status for pod" podUID=0ad7422ab14ae2d4b971f1822a1ff8ef pod="kube-system/kube-scheduler-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.821657    6078 status_manager.go:601] "Failed to get status for pod" podUID=3c40bf658f457f3d925e48d646a29704 pod="kube-system/kube-apiserver-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.821881    6078 status_manager.go:601] "Failed to get status for pod" podUID=06052898487a9eef2760f89d323d2979 pod="kube-system/etcd-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.973843    6078 scope.go:110] "RemoveContainer" containerID="e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:45.974382    6078 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20211117161858-31976_kube-system(3c40bf658f457f3d925e48d646a29704)\"" pod="kube-system/kube-apiserver-functional-20211117161858-31976" podUID=3c40bf658f457f3d925e48d646a29704
	Nov 18 00:21:46 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:46.980435    6078 scope.go:110] "RemoveContainer" containerID="e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335"
	Nov 18 00:21:46 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:46.981056    6078 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20211117161858-31976_kube-system(3c40bf658f457f3d925e48d646a29704)\"" pod="kube-system/kube-apiserver-functional-20211117161858-31976" podUID=3c40bf658f457f3d925e48d646a29704
	Nov 18 00:21:47 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:47.264527    6078 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:47 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:47.989152    6078 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-20211117161858-31976.16b87c15dfe682a4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-20211117161858-31976", UID:"3c40bf658f457f3d925e48d646a29704", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"functional-20211117161858-31976"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc05d8504782218a4, ext:4253908481, loc:(*time.Location)(0x77a8680)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc05d8504782218a4, ext:4253908481, loc:(*time.Location)(0x77a8680)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Nov 18 00:21:50 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:50.465719    6078 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> storage-provisioner [422236a5e675] <==
	* I1118 00:21:36.639683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1118 00:21:36.650400       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1118 00:21:36.650449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1118 00:21:40.111980       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:44.350512       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:47.946818       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:50.997974       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [ea850ca3cc61] <==
	* I1118 00:20:52.931610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1118 00:20:52.938911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1118 00:20:52.939029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1118 00:20:52.953253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1118 00:20:52.953292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32b2b3be-e39d-44de-bb2d-5d1067722fdc", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20211117161858-31976_1183ee1f-9cef-48cf-8de5-e4dbd61f5221 became leader
	I1118 00:20:52.953495       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20211117161858-31976_1183ee1f-9cef-48cf-8de5-e4dbd61f5221!
	I1118 00:20:53.054549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20211117161858-31976_1183ee1f-9cef-48cf-8de5-e4dbd61f5221!
	
	

-- /stdout --
** stderr ** 
	E1117 16:21:50.586491   33927 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20211117161858-31976 -n functional-20211117161858-31976
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20211117161858-31976 -n functional-20211117161858-31976: exit status 2 (634.073041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-20211117161858-31976" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (31.17s)

TestFunctional/serial/ComponentHealth (13.26s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:752: (dbg) Run:  kubectl --context functional-20211117161858-31976 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:752: (dbg) Done: kubectl --context functional-20211117161858-31976 get po -l tier=control-plane -n kube-system -o=json: (7.868000346s)
functional_test.go:767: etcd phase: Running
functional_test.go:775: etcd is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-11-17 16:20:11 -0800 PST ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc000e937d0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0001aa000} Ready:false RestartCount:1 Image:k8s.gcr.io/etcd:3.5.0-0 ImageID:docker-pullable://k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d ContainerID:docker://4a9c75d87f40be89aa64aeeaf48eee2edbfed6225334ae432dfbdba1093f28b5}]}
functional_test.go:767: kube-apiserver phase: Pending
functional_test.go:769: kube-apiserver is not Running: {Phase:Pending Conditions:[] Message: Reason: HostIP: PodIP: StartTime:<nil> ContainerStatuses:[]}
functional_test.go:767: kube-controller-manager phase: Running
functional_test.go:775: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-11-17 16:20:11 -0800 PST ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc000e93cf8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0001aa070} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-controller-manager:v1.22.3 ImageID:docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:e67dbfd3796b7ce04fee80acb52876928c290224a91862c5849c3ab0fa31ca78 ContainerID:docker://e34b718143e148506fc44d89d96a1a8ec9f06973714a55f8f3b59ff51ade5a13}]}
functional_test.go:767: kube-scheduler phase: Running
functional_test.go:775: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-11-17 16:20:11 -0800 PST ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc000e93ef0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0001aa0e0} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-scheduler:v1.22.3 ImageID:docker-pullable://k8s.gcr.io/kube-scheduler@sha256:cac7ea67201a84c00f3e8d9be51877c25fb539055ac404c4a9d2dd4c79d3fdab ContainerID:docker://250907264d348e83f2dbf05695470c030947e582eba007c6188deaa21b57fcd1}]}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117161858-31976
helpers_test.go:235: (dbg) docker inspect functional-20211117161858-31976:

-- stdout --
	[
	    {
	        "Id": "d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3",
	        "Created": "2021-11-18T00:19:05.626133858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38790,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-11-18T00:19:17.027751041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/hosts",
	        "LogPath": "/var/lib/docker/containers/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3/d46e9a51de65467f8f95d5caf1ea606f6207ea96f31ad410cb49b500d1a234d3-json.log",
	        "Name": "/functional-20211117161858-31976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20211117161858-31976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20211117161858-31976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099-init/diff:/var/lib/docker/overlay2/a93dcfb6f3d6cc41c972ba77a74f26c33bb647aae37c056960e88eff1f45318e/diff:/var/lib/docker/overlay2/5bf663a55dc098d601b6dea4d4c10aaec9f068dcf0de0b940d77262bf5e9bdc6/diff:/var/lib/docker/overlay2/042de3e4be800f5293bfc3bc6fc92553d872b01461acd16fa5a146a312df0e28/diff:/var/lib/docker/overlay2/0790f68de366f4e5284d9606e1a26055a65a8ee9c04fd59b5bac02d4016cf450/diff:/var/lib/docker/overlay2/0b2d68653092e419e945cb562f07ca719191e8b17667a18a8f7b4c24ad10ab0e/diff:/var/lib/docker/overlay2/74497acbc2dda9790b519bf52aa865acb9f38e5cef76e7d8a3a4b529a3d9e702/diff:/var/lib/docker/overlay2/ff120bb48ef6e6e06a88f4c5187e25d554152cc97e0ea1fb3555f17a66908154/diff:/var/lib/docker/overlay2/f5db8db950342323f76b38613fda86996d2b4a2aa755297267caa5b7b8981da0/diff:/var/lib/docker/overlay2/6017be4153ffd7b1ab22d79efd97895ca9791c09d7e77b930827f1f338219cbb/diff:/var/lib/docker/overlay2/fd8bd2db0148ff3cbae056e3d40a5e785a6daa9839d28dce541d1db16c76a910/diff:/var/lib/docker/overlay2/6e6b657f7202e480d424be1b6934196a3f4b88d643e66c8f27823833e0833ba4/diff:/var/lib/docker/overlay2/4129ea43aaaf15e7de040c40f26e1a9a163317620d7ef92d98e4b2467d593034/diff:/var/lib/docker/overlay2/fd07546476691a27dba8ff73e418292264e996c20c06e955a30dbad83de1733c/diff:/var/lib/docker/overlay2/5ad0909670349956719e0f0ea9ddd5ee7e8959f505f470f10ac2520aa8014e97/diff:/var/lib/docker/overlay2/8825a434b266c3c834891f42fa35dc89e993f3c9e395f2f3c4d6e815f0e329af/diff:/var/lib/docker/overlay2/b4eeccd1b8c68a280e0e4d881a805d320f55b0c471529a3313b31b47252b0c47/diff:/var/lib/docker/overlay2/35fd48039713604d0debc8ac2009daf167d289893615387c0ad9287bffa10082/diff:/var/lib/docker/overlay2/494facb3e11d8950ad7593c6354187416d14009c69168858a6dffc25ebfbf84c/diff:/var/lib/docker/overlay2/4f1ce1df10039c93e604a552d53e3e6645d372f9cddab1b12c00f0067ad80ba2/diff:/var/lib/docker/overlay2/659379d7d9913fcb5492ef098d76112aded93cb7ce203354f9fbcee82d5b062c/diff:/var/lib/docker/overlay2/a1c5d5e92d294301fbac809907fba5a0acc107e187b93e52d5afb6bb0bc2eb9d/diff:/var/lib/docker/overlay2/5065eba0fcf1a8cf75076e4a123f1e9f038fbbf1fae3f82e3ea33d1523b60c91/diff:/var/lib/docker/overlay2/594d5b999ebec6822417fd4ba02da0a7cde6c024fdfb474db4ff3a0784d7f735/diff:/var/lib/docker/overlay2/067ddba03cf6c6688f887150cb3d7174e90d66e9a6f356e86cc3a906c4941894/diff:/var/lib/docker/overlay2/6cee93a03c4d65017c1ef9d392ae34d531e8f7abdd809dc26a0a48ede1ff8367/diff:/var/lib/docker/overlay2/d1e8cfbc84975893028d0b859ebb9ca07a8efc1c8ad9abc10fb1e9c7235f53d4/diff:/var/lib/docker/overlay2/4f2c513e3b5d4707a2aed9244d7ef9f6fc2631524cf8225ba0dfa2f8c3e3931c/diff:/var/lib/docker/overlay2/9be7da800f4028bec22556081948d1d22da9bb3be2d63dde017c61c44c0274ea/diff:/var/lib/docker/overlay2/58b13aae5c184fe2071d64f90c7955b5dbbe76d225c3bd9847f03d1ce1ec8664/diff:/var/lib/docker/overlay2/997e96d68467fd54763e75e2d501272ee5d0497b00ab2c5522522f8ba0754f07/diff:/var/lib/docker/overlay2/49be1e8263d191ec5fdfd8cd4138af81d23aaa12d6548a465b255afe5e8819c3/diff:/var/lib/docker/overlay2/8658b7127dfd599a3897343d297be67b4576b31720472927b3f5f1856059c56d/diff:/var/lib/docker/overlay2/26b53a2f30fe8fc01bfec363d6b00e2f5ab4f48325d1b5f62b5d8f2854dda781/diff:/var/lib/docker/overlay2/9abe9a8e38d4d40dcfa6152a2ae1bf2ed14dfa1579245d8534bea68a1124307b/diff:/var/lib/docker/overlay2/a1d4fbf621974c40a62164636c14f0dbe1ffa8a7bc7b13c1995ed93b1113dbe7/diff:/var/lib/docker/overlay2/aecec25c90cb357ced7bbaa69f3135cb6f7c8765605b7a360cd515642e00de14/diff:/var/lib/docker/overlay2/5c55f200070f6d0617d7f3031ce23a27df90a72720139ef51fac61fe40032625/diff:/var/lib/docker/overlay2/7a473817be0962d3e2ae1f57f32e95115af914c56a786f2d4d15a9dca232cefa/diff:/var/lib/docker/overlay2/3ca997de4525080aca8f86ad0f68f4f26acc4262a80846cfc96b3d4af8dd2526/diff:/var/lib/docker/overlay2/ad3ce384b651be2a1810da477a29e598be710b6e40f940a3bb3a4a9ed7ee048d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e0ec7a7f73c1ddbbb9f56f242aeba44c6beb8e8ba4345e2d31970a02d02099/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20211117161858-31976",
	                "Source": "/var/lib/docker/volumes/functional-20211117161858-31976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20211117161858-31976",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20211117161858-31976",
	                "name.minikube.sigs.k8s.io": "functional-20211117161858-31976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "076ac0afe3795f81804d3da602b3a69ab38e221a0122b3f448d185a1ba341295",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52137"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52138"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52140"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52141"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/076ac0afe379",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20211117161858-31976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d46e9a51de65",
	                        "functional-20211117161858-31976"
	                    ],
	                    "NetworkID": "716130e30ae38c1d22de4c84857f1a2addaf8dd4e40f1651d386549d11497de6",
	                    "EndpointID": "dc6711626c432c360f8407c17df6ecba81d669b643cd6e499253a6205087f92f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
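The `docker inspect` array above maps each exposed container port to an ephemeral host port under `NetworkSettings.Ports` (e.g. the apiserver on `8441/tcp` is reachable at `127.0.0.1:52141`), which is how the test harness reaches the node container. As a minimal sketch of consuming that output programmatically — a hypothetical helper under assumed field names from the dump above, not code from the minikube test suite:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// portBinding mirrors the HostIp/HostPort objects in NetworkSettings.Ports.
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspectEntry captures only the fields this sketch needs from `docker inspect` output.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

// hostPorts returns containerPort -> "hostIp:hostPort" for the first inspected container.
func hostPorts(raw []byte) (map[string]string, error) {
	var entries []inspectEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		return nil, err
	}
	if len(entries) == 0 {
		return nil, fmt.Errorf("no inspect entries")
	}
	out := map[string]string{}
	for port, binds := range entries[0].NetworkSettings.Ports {
		if len(binds) > 0 {
			out[port] = binds[0].HostIp + ":" + binds[0].HostPort
		}
	}
	return out, nil
}

func main() {
	// Trimmed sample in the same shape as the inspect dump above.
	raw := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"52137"}],"8441/tcp":[{"HostIp":"127.0.0.1","HostPort":"52141"}]}}}]`)
	ports, err := hostPorts(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(ports["22/tcp"], ports["8441/tcp"]) // prints "127.0.0.1:52137 127.0.0.1:52141"
}
```

In practice the full array comes from `docker inspect <container>`; real code should also tolerate containers with no port bindings.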

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117161858-31976 -n functional-20211117161858-31976
helpers_test.go:239: (dbg) Done: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117161858-31976 -n functional-20211117161858-31976: (1.320154525s)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs -n 25: (3.150699577s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                          Args                                          |             Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                                                    | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:06 PST | Wed, 17 Nov 2021 16:17:06 PST |
	|         | addons-20211117161126-31976                                                            |                                 |         |         |                               |                               |
	| addons  | disable dashboard -p                                                                   | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:06 PST | Wed, 17 Nov 2021 16:17:06 PST |
	|         | addons-20211117161126-31976                                                            |                                 |         |         |                               |                               |
	| delete  | -p addons-20211117161126-31976                                                         | addons-20211117161126-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:06 PST | Wed, 17 Nov 2021 16:17:14 PST |
	| start   | -p nospam-20211117161714-31976 -n=1 --memory=2250 --wait=false                         | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:17:14 PST | Wed, 17 Nov 2021 16:18:24 PST |
	|         | --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 |                                 |         |         |                               |                               |
	|         | --driver=docker                                                                        |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:29 PST | Wed, 17 Nov 2021 16:18:29 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | pause                                                                                  |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:30 PST | Wed, 17 Nov 2021 16:18:30 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | pause                                                                                  |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:30 PST | Wed, 17 Nov 2021 16:18:31 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | pause                                                                                  |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:31 PST | Wed, 17 Nov 2021 16:18:32 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | unpause                                                                                |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:32 PST | Wed, 17 Nov 2021 16:18:32 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | unpause                                                                                |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:32 PST | Wed, 17 Nov 2021 16:18:33 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | unpause                                                                                |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:33 PST | Wed, 17 Nov 2021 16:18:50 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | stop                                                                                   |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:51 PST | Wed, 17 Nov 2021 16:18:51 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | stop                                                                                   |                                 |         |         |                               |                               |
	| -p      | nospam-20211117161714-31976 --log_dir                                                  | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:51 PST | Wed, 17 Nov 2021 16:18:51 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976           |                                 |         |         |                               |                               |
	|         | stop                                                                                   |                                 |         |         |                               |                               |
	| delete  | -p nospam-20211117161714-31976                                                         | nospam-20211117161714-31976     | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:51 PST | Wed, 17 Nov 2021 16:18:58 PST |
	| start   | -p                                                                                     | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:18:58 PST | Wed, 17 Nov 2021 16:21:01 PST |
	|         | functional-20211117161858-31976                                                        |                                 |         |         |                               |                               |
	|         | --memory=4000                                                                          |                                 |         |         |                               |                               |
	|         | --apiserver-port=8441                                                                  |                                 |         |         |                               |                               |
	|         | --wait=all --driver=docker                                                             |                                 |         |         |                               |                               |
	| start   | -p                                                                                     | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:01 PST | Wed, 17 Nov 2021 16:21:09 PST |
	|         | functional-20211117161858-31976                                                        |                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                                 |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976 cache add                                              | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:11 PST | Wed, 17 Nov 2021 16:21:13 PST |
	|         | minikube-local-cache-test:functional-20211117161858-31976                              |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976 cache delete                                           | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:13 PST | Wed, 17 Nov 2021 16:21:13 PST |
	|         | minikube-local-cache-test:functional-20211117161858-31976                              |                                 |         |         |                               |                               |
	| cache   | list                                                                                   | minikube                        | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:13 PST | Wed, 17 Nov 2021 16:21:13 PST |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:13 PST | Wed, 17 Nov 2021 16:21:14 PST |
	|         | ssh sudo crictl images                                                                 |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:15 PST | Wed, 17 Nov 2021 16:21:15 PST |
	|         | cache reload                                                                           |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:17 PST | Wed, 17 Nov 2021 16:21:19 PST |
	|         | logs -n 25                                                                             |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:20 PST | Wed, 17 Nov 2021 16:21:21 PST |
	|         | kubectl -- --context                                                                   |                                 |         |         |                               |                               |
	|         | functional-20211117161858-31976                                                        |                                 |         |         |                               |                               |
	|         | get pods                                                                               |                                 |         |         |                               |                               |
	| kubectl | --profile=functional-20211117161858-31976                                              | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:21 PST | Wed, 17 Nov 2021 16:21:21 PST |
	|         | -- --context                                                                           |                                 |         |         |                               |                               |
	|         | functional-20211117161858-31976 get pods                                               |                                 |         |         |                               |                               |
	| -p      | functional-20211117161858-31976                                                        | functional-20211117161858-31976 | jenkins | v1.24.0 | Wed, 17 Nov 2021 16:21:49 PST | Wed, 17 Nov 2021 16:21:52 PST |
	|         | logs -n 25                                                                             |                                 |         |         |                               |                               |
	|---------|----------------------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 16:21:21
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 16:21:21.622885   33790 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:21:21.623009   33790 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:21:21.623011   33790 out.go:310] Setting ErrFile to fd 2...
	I1117 16:21:21.623013   33790 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:21:21.623084   33790 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 16:21:21.623330   33790 out.go:304] Setting JSON to false
	I1117 16:21:21.649303   33790 start.go:112] hostinfo: {"hostname":"37310.local","uptime":8456,"bootTime":1637186425,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:21:21.649404   33790 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:21:21.676374   33790 out.go:176] * [functional-20211117161858-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 16:21:21.676542   33790 notify.go:174] Checking for updates...
	I1117 16:21:21.702052   33790 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 16:21:21.728239   33790 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 16:21:21.753862   33790 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 16:21:21.779964   33790 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 16:21:21.780393   33790 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:21:21.780436   33790 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 16:21:21.881721   33790 docker.go:132] docker version: linux-20.10.6
	I1117 16:21:21.881850   33790 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:21:22.071842   33790 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-18 00:21:22.004732313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:21:22.120163   33790 out.go:176] * Using the docker driver based on existing profile
	I1117 16:21:22.120221   33790 start.go:280] selected driver: docker
	I1117 16:21:22.120231   33790 start.go:775] validating driver "docker" against &{Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAdd
onRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:21:22.120357   33790 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 16:21:22.120726   33790 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:21:22.310085   33790 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-18 00:21:22.241494137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:21:22.312056   33790 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 16:21:22.312086   33790 cni.go:93] Creating CNI manager for ""
	I1117 16:21:22.312094   33790 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:21:22.312101   33790 start_flags.go:282] config:
	{Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddo
nRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:21:22.360734   33790 out.go:176] * Starting control plane node functional-20211117161858-31976 in cluster functional-20211117161858-31976
	I1117 16:21:22.360819   33790 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 16:21:22.386749   33790 out.go:176] * Pulling base image ...
	I1117 16:21:22.386821   33790 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 16:21:22.386901   33790 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 16:21:22.386931   33790 cache.go:57] Caching tarball of preloaded images
	I1117 16:21:22.386925   33790 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 16:21:22.387177   33790 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 16:21:22.387208   33790 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 16:21:22.387966   33790 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/config.json ...
	I1117 16:21:22.514468   33790 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 16:21:22.514485   33790 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 16:21:22.514497   33790 cache.go:206] Successfully downloaded all kic artifacts
	I1117 16:21:22.514541   33790 start.go:313] acquiring machines lock for functional-20211117161858-31976: {Name:mkf7e5ee0db2d67009702787d2639dd998f1b20a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:21:22.514618   33790 start.go:317] acquired machines lock for "functional-20211117161858-31976" in 62.221µs
	I1117 16:21:22.514641   33790 start.go:93] Skipping create...Using existing machine configuration
	I1117 16:21:22.514648   33790 fix.go:55] fixHost starting: 
	I1117 16:21:22.514907   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:22.635958   33790 fix.go:108] recreateIfNeeded on functional-20211117161858-31976: state=Running err=<nil>
	W1117 16:21:22.635981   33790 fix.go:134] unexpected machine state, will restart: <nil>
	I1117 16:21:22.662731   33790 out.go:176] * Updating the running docker "functional-20211117161858-31976" container ...
	I1117 16:21:22.662759   33790 machine.go:88] provisioning docker machine ...
	I1117 16:21:22.662778   33790 ubuntu.go:169] provisioning hostname "functional-20211117161858-31976"
	I1117 16:21:22.662854   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:22.782815   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:22.782997   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:22.783006   33790 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20211117161858-31976 && echo "functional-20211117161858-31976" | sudo tee /etc/hostname
	I1117 16:21:22.902454   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20211117161858-31976
	
	I1117 16:21:22.902543   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:23.024332   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:23.024486   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:23.024499   33790 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20211117161858-31976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20211117161858-31976/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20211117161858-31976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1117 16:21:23.134580   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1117 16:21:23.134623   33790 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube}
	I1117 16:21:23.134651   33790 ubuntu.go:177] setting up certificates
	I1117 16:21:23.134667   33790 provision.go:83] configureAuth start
	I1117 16:21:23.134760   33790 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211117161858-31976
	I1117 16:21:23.262120   33790 provision.go:138] copyHostCerts
	I1117 16:21:23.262204   33790 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem, removing ...
	I1117 16:21:23.262209   33790 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem
	I1117 16:21:23.262305   33790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem (1123 bytes)
	I1117 16:21:23.262507   33790 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem, removing ...
	I1117 16:21:23.262516   33790 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem
	I1117 16:21:23.262571   33790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem (1679 bytes)
	I1117 16:21:23.262708   33790 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem, removing ...
	I1117 16:21:23.262711   33790 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem
	I1117 16:21:23.262777   33790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem (1078 bytes)
	I1117 16:21:23.262901   33790 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem org=jenkins.functional-20211117161858-31976 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20211117161858-31976]
	I1117 16:21:23.387784   33790 provision.go:172] copyRemoteCerts
	I1117 16:21:23.387849   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1117 16:21:23.387901   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:23.508062   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:23.598162   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1117 16:21:23.614485   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1117 16:21:23.631096   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1117 16:21:23.647617   33790 provision.go:86] duration metric: configureAuth took 512.923631ms
	I1117 16:21:23.647626   33790 ubuntu.go:193] setting minikube options for container-runtime
	I1117 16:21:23.647791   33790 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:21:23.647858   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:23.767974   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:23.768127   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:23.768134   33790 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1117 16:21:23.881322   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1117 16:21:23.881336   33790 ubuntu.go:71] root file system type: overlay
	I1117 16:21:23.881520   33790 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1117 16:21:23.881615   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.000288   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:24.000448   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:24.000493   33790 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1117 16:21:24.119293   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1117 16:21:24.119389   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.237960   33790 main.go:130] libmachine: Using SSH client type: native
	I1117 16:21:24.238130   33790 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396d40] 0x1399e20 <nil>  [] 0s} 127.0.0.1 52137 <nil> <nil>}
	I1117 16:21:24.238140   33790 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1117 16:21:24.352811   33790 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1117 16:21:24.352825   33790 machine.go:91] provisioned docker machine in 1.690013418s
	I1117 16:21:24.352833   33790 start.go:267] post-start starting for "functional-20211117161858-31976" (driver="docker")
	I1117 16:21:24.352836   33790 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1117 16:21:24.352919   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1117 16:21:24.352978   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.472328   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:24.555003   33790 ssh_runner.go:152] Run: cat /etc/os-release
	I1117 16:21:24.558639   33790 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1117 16:21:24.558652   33790 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1117 16:21:24.558660   33790 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1117 16:21:24.558665   33790 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1117 16:21:24.558672   33790 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/addons for local assets ...
	I1117 16:21:24.558765   33790 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files for local assets ...
	I1117 16:21:24.558951   33790 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem -> 319762.pem in /etc/ssl/certs
	I1117 16:21:24.559095   33790 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/test/nested/copy/31976/hosts -> hosts in /etc/test/nested/copy/31976
	I1117 16:21:24.559142   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/31976
	I1117 16:21:24.566252   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem --> /etc/ssl/certs/319762.pem (1708 bytes)
	I1117 16:21:24.582991   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/test/nested/copy/31976/hosts --> /etc/test/nested/copy/31976/hosts (40 bytes)
	I1117 16:21:24.599708   33790 start.go:270] post-start completed in 246.860649ms
	I1117 16:21:24.599784   33790 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:21:24.599840   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.718037   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:24.797768   33790 fix.go:57] fixHost completed within 2.283044439s
	I1117 16:21:24.797784   33790 start.go:80] releasing machines lock for "functional-20211117161858-31976", held for 2.283095634s
	I1117 16:21:24.797900   33790 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211117161858-31976
	I1117 16:21:24.917591   33790 ssh_runner.go:152] Run: systemctl --version
	I1117 16:21:24.917601   33790 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1117 16:21:24.917658   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:24.917672   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:25.046625   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:25.046759   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:25.593369   33790 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I1117 16:21:25.602997   33790 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I1117 16:21:25.612754   33790 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I1117 16:21:25.612817   33790 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1117 16:21:25.621861   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 16:21:25.634265   33790 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I1117 16:21:25.711219   33790 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I1117 16:21:25.789866   33790 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I1117 16:21:25.800286   33790 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1117 16:21:25.877362   33790 ssh_runner.go:152] Run: sudo systemctl start docker
	I1117 16:21:25.887091   33790 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I1117 16:21:25.925452   33790 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I1117 16:21:25.992258   33790 out.go:203] * Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
	I1117 16:21:25.992393   33790 cli_runner.go:115] Run: docker exec -t functional-20211117161858-31976 dig +short host.docker.internal
	I1117 16:21:26.182933   33790 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1117 16:21:26.183029   33790 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
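The `grep 192.168.65.2	host.minikube.internal$ /etc/hosts` probe above is how minikube decides whether a hosts entry still needs to be appended. A sketch of that idempotent check against a scratch file (the file path and `ensure_host` helper are illustrative, not minikube's code):

```shell
# Append "IP<tab>NAME" to a hosts file only when the entry is not already present.
HOSTS=/tmp/demo-hosts
: > "$HOSTS"
ensure_host() {
  grep -q "^$1	$2\$" "$HOSTS" || printf '%s\t%s\n' "$1" "$2" >> "$HOSTS"
}
ensure_host 192.168.65.2 host.minikube.internal
ensure_host 192.168.65.2 host.minikube.internal   # second call is a no-op
cat "$HOSTS"
```

Running the check before appending keeps repeated `minikube start` invocations from duplicating the entry.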
	I1117 16:21:26.187212   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:26.350225   33790 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1117 16:21:26.350377   33790 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 16:21:26.350545   33790 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:21:26.382564   33790 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.3
	k8s.gcr.io/kube-scheduler:v1.22.3
	k8s.gcr.io/kube-controller-manager:v1.22.3
	k8s.gcr.io/kube-proxy:v1.22.3
	minikube-local-cache-test:functional-20211117161858-31976
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1117 16:21:26.382573   33790 docker.go:489] Images already preloaded, skipping extraction
	I1117 16:21:26.382653   33790 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:21:26.412928   33790 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.3
	k8s.gcr.io/kube-scheduler:v1.22.3
	k8s.gcr.io/kube-controller-manager:v1.22.3
	k8s.gcr.io/kube-proxy:v1.22.3
	minikube-local-cache-test:functional-20211117161858-31976
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1117 16:21:26.412940   33790 cache_images.go:79] Images are preloaded, skipping loading
	I1117 16:21:26.413025   33790 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I1117 16:21:26.493364   33790 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1117 16:21:26.493384   33790 cni.go:93] Creating CNI manager for ""
	I1117 16:21:26.493390   33790 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:21:26.493393   33790 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1117 16:21:26.493408   33790 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20211117161858-31976 NodeName:functional-20211117161858-31976 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1117 16:21:26.493510   33790 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20211117161858-31976"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1117 16:21:26.493599   33790 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20211117161858-31976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1117 16:21:26.493666   33790 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
	I1117 16:21:26.501690   33790 binaries.go:44] Found k8s binaries, skipping transfer
	I1117 16:21:26.501744   33790 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1117 16:21:26.508804   33790 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I1117 16:21:26.521182   33790 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1117 16:21:26.533276   33790 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1924 bytes)
	I1117 16:21:26.554949   33790 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1117 16:21:26.559258   33790 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976 for IP: 192.168.49.2
	I1117 16:21:26.559460   33790 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.key
	I1117 16:21:26.559521   33790 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.key
	I1117 16:21:26.559652   33790 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.key
	I1117 16:21:26.559720   33790 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/apiserver.key.dd3b5fb2
	I1117 16:21:26.559771   33790 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/proxy-client.key
	I1117 16:21:26.559974   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/31976.pem (1338 bytes)
	W1117 16:21:26.560019   33790 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/31976_empty.pem, impossibly tiny 0 bytes
	I1117 16:21:26.560033   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem (1679 bytes)
	I1117 16:21:26.560072   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem (1078 bytes)
	I1117 16:21:26.560110   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem (1123 bytes)
	I1117 16:21:26.560155   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem (1679 bytes)
	I1117 16:21:26.560217   33790 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem (1708 bytes)
	I1117 16:21:26.561014   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1117 16:21:26.582083   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1117 16:21:26.599821   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1117 16:21:26.636540   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1117 16:21:26.653473   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1117 16:21:26.670129   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1117 16:21:26.686208   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1117 16:21:26.702853   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1117 16:21:26.719475   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1117 16:21:26.736305   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/31976.pem --> /usr/share/ca-certificates/31976.pem (1338 bytes)
	I1117 16:21:26.753116   33790 ssh_runner.go:319] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/319762.pem --> /usr/share/ca-certificates/319762.pem (1708 bytes)
	I1117 16:21:26.770927   33790 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1117 16:21:26.783126   33790 ssh_runner.go:152] Run: openssl version
	I1117 16:21:26.788339   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31976.pem && ln -fs /usr/share/ca-certificates/31976.pem /etc/ssl/certs/31976.pem"
	I1117 16:21:26.795960   33790 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/31976.pem
	I1117 16:21:26.799692   33790 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 18 00:18 /usr/share/ca-certificates/31976.pem
	I1117 16:21:26.799736   33790 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31976.pem
	I1117 16:21:26.804943   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/31976.pem /etc/ssl/certs/51391683.0"
	I1117 16:21:26.812809   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/319762.pem && ln -fs /usr/share/ca-certificates/319762.pem /etc/ssl/certs/319762.pem"
	I1117 16:21:26.820966   33790 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/319762.pem
	I1117 16:21:26.825189   33790 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 18 00:18 /usr/share/ca-certificates/319762.pem
	I1117 16:21:26.825239   33790 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/319762.pem
	I1117 16:21:26.830525   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/319762.pem /etc/ssl/certs/3ec20f2e.0"
	I1117 16:21:26.838070   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1117 16:21:26.846059   33790 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:21:26.850266   33790 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 18 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:21:26.850311   33790 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:21:26.855877   33790 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
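The `openssl x509 -hash` / `ln -fs` sequence above installs each CA into the trust store under its OpenSSL subject-hash name (`<hash>.0`), which is how OpenSSL locates CAs at verification time. A self-contained rerun with a throwaway CA (all `/tmp/demo-ssl` paths are demo assumptions; requires the `openssl` CLI):

```shell
# Generate a throwaway CA, then link it under its subject hash as the log's ln -fs does.
mkdir -p /tmp/demo-ssl/certs
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout /tmp/demo-ssl/ca.key -out /tmp/demo-ssl/ca.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ssl/ca.pem)
ln -fs /tmp/demo-ssl/ca.pem "/tmp/demo-ssl/certs/${hash}.0"
ls -l /tmp/demo-ssl/certs
```

The `test -L … ||` guard seen in the log makes the link creation idempotent across restarts.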
	I1117 16:21:26.863048   33790 kubeadm.go:390] StartCluster: {Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:fal
se volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:21:26.863171   33790 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1117 16:21:26.891230   33790 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1117 16:21:26.899485   33790 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1117 16:21:26.899495   33790 kubeadm.go:600] restartCluster start
	I1117 16:21:26.899552   33790 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1117 16:21:26.906674   33790 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1117 16:21:26.906751   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:27.028837   33790 kubeconfig.go:92] found "functional-20211117161858-31976" server: "https://127.0.0.1:52141"
	I1117 16:21:27.032037   33790 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1117 16:21:27.040014   33790 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-11-18 00:19:49.283585838 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-11-18 00:21:26.562255672 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
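The "needs reconfigure" decision above is simply a non-zero exit from `diff -u` between the live kubeadm config and the freshly rendered `.new` copy. Sketched against two scratch files (the `/tmp/demo-kubeadm.yaml` names are assumptions):

```shell
# Reconfigure only when the live config differs from the newly rendered one.
printf 'enable-admission-plugins: old\n' > /tmp/demo-kubeadm.yaml
printf 'enable-admission-plugins: new\n' > /tmp/demo-kubeadm.yaml.new
if diff -u /tmp/demo-kubeadm.yaml /tmp/demo-kubeadm.yaml.new >/dev/null; then
  echo "configs match: skip reconfigure"
else
  echo "needs reconfigure"
  cp /tmp/demo-kubeadm.yaml.new /tmp/demo-kubeadm.yaml   # mirrors the later 'sudo cp' step
fi
```

Copying `.new` over the live file afterwards, as the log does at 16:21:32, makes the next start a no-op if nothing else changes.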
	I1117 16:21:27.040021   33790 kubeadm.go:1032] stopping kube-system containers ...
	I1117 16:21:27.040101   33790 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1117 16:21:27.071071   33790 docker.go:390] Stopping containers: [ea850ca3cc61 d0d3e0dca5e9 41b6ab69a384 0e35560df59b e4efee81ca46 26ff204bef53 7d514020b6b7 9b2f0c8b2294 57f34cc17d7a acade1ad68d7 5c6c4dc475e0 9bad689182de 72ff4c2179fb e30cdc66639d c595f5f22c80]
	I1117 16:21:27.071168   33790 ssh_runner.go:152] Run: docker stop ea850ca3cc61 d0d3e0dca5e9 41b6ab69a384 0e35560df59b e4efee81ca46 26ff204bef53 7d514020b6b7 9b2f0c8b2294 57f34cc17d7a acade1ad68d7 5c6c4dc475e0 9bad689182de 72ff4c2179fb e30cdc66639d c595f5f22c80
	I1117 16:21:32.228295   33790 ssh_runner.go:192] Completed: docker stop ea850ca3cc61 d0d3e0dca5e9 41b6ab69a384 0e35560df59b e4efee81ca46 26ff204bef53 7d514020b6b7 9b2f0c8b2294 57f34cc17d7a acade1ad68d7 5c6c4dc475e0 9bad689182de 72ff4c2179fb e30cdc66639d c595f5f22c80: (5.156941798s)
	I1117 16:21:32.228388   33790 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I1117 16:21:32.273365   33790 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1117 16:21:32.281095   33790 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Nov 18 00:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov 18 00:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Nov 18 00:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov 18 00:19 /etc/kubernetes/scheduler.conf
	
	I1117 16:21:32.281148   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1117 16:21:32.288879   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1117 16:21:32.296147   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1117 16:21:32.303562   33790 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1117 16:21:32.303622   33790 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1117 16:21:32.312025   33790 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1117 16:21:32.319391   33790 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1117 16:21:32.319450   33790 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1117 16:21:32.326292   33790 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1117 16:21:32.333970   33790 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1117 16:21:32.333977   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:32.380483   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.540088   33790 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159560405s)
	I1117 16:21:33.540097   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.673891   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.726579   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:33.795988   33790 api_server.go:51] waiting for apiserver process to appear ...
	I1117 16:21:33.796057   33790 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 16:21:33.812813   33790 api_server.go:71] duration metric: took 16.830404ms to wait for apiserver process to appear ...
	I1117 16:21:33.812822   33790 api_server.go:87] waiting for apiserver healthz status ...
	I1117 16:21:33.812833   33790 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52141/healthz ...
	I1117 16:21:33.819102   33790 api_server.go:266] https://127.0.0.1:52141/healthz returned 200:
	ok
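The healthz wait above is a bounded poll for an HTTP 200 response whose body is `ok`. A curl-based loop of the same shape, exercised here against a local static server standing in for the apiserver (the port, paths, and `wait_healthz` helper are illustrative assumptions; requires `curl` and `python3`):

```shell
# Poll a /healthz URL until it answers "ok" or the deadline passes.
wait_healthz() {
  deadline=$(( $(date +%s) + $2 ))
  while [ "$(date +%s)" -le "$deadline" ]; do
    [ "$(curl -sS -m 2 "$1" 2>/dev/null)" = "ok" ] && return 0
    sleep 1
  done
  return 1
}
mkdir -p /tmp/demo-hz && printf 'ok' > /tmp/demo-hz/healthz
( cd /tmp/demo-hz && exec python3 -m http.server 52141 ) >/dev/null 2>&1 &
srv=$!
wait_healthz http://127.0.0.1:52141/healthz 10 && echo healthy > /tmp/demo-hz/result
kill "$srv"
cat /tmp/demo-hz/result
```

Retrying inside the deadline absorbs the window where the apiserver container is up but not yet serving, which is exactly the gap between `kubeadm init phase etcd local` and the first 200 in the log.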
	I1117 16:21:33.826158   33790 api_server.go:140] control plane version: v1.22.3
	I1117 16:21:33.826169   33790 api_server.go:130] duration metric: took 13.343873ms to wait for apiserver health ...
	I1117 16:21:33.826174   33790 cni.go:93] Creating CNI manager for ""
	I1117 16:21:33.826178   33790 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:21:33.826185   33790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1117 16:21:33.836317   33790 system_pods.go:59] 7 kube-system pods found
	I1117 16:21:33.836331   33790 system_pods.go:61] "coredns-78fcd69978-dnq6x" [67a186a3-f954-4960-bb9e-57d18527dbc7] Running
	I1117 16:21:33.836337   33790 system_pods.go:61] "etcd-functional-20211117161858-31976" [0591e3ee-239f-402b-a882-d725460bb901] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1117 16:21:33.836340   33790 system_pods.go:61] "kube-apiserver-functional-20211117161858-31976" [e7056857-2ed9-4d76-8caa-79910d7b601e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1117 16:21:33.836348   33790 system_pods.go:61] "kube-controller-manager-functional-20211117161858-31976" [7fd136e2-3546-4ed5-a212-51641f6cb3d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1117 16:21:33.836351   33790 system_pods.go:61] "kube-proxy-wbv29" [453e3252-c5a8-48b4-893b-0496a8ed4dec] Running
	I1117 16:21:33.836353   33790 system_pods.go:61] "kube-scheduler-functional-20211117161858-31976" [709bdcfd-0135-490f-9d56-c5ad014aab58] Running
	I1117 16:21:33.836358   33790 system_pods.go:61] "storage-provisioner" [9b9ccef0-fccc-43f7-8dec-952d07564964] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1117 16:21:33.836361   33790 system_pods.go:74] duration metric: took 10.173348ms to wait for pod list to return data ...
	I1117 16:21:33.836365   33790 node_conditions.go:102] verifying NodePressure condition ...
	I1117 16:21:33.840538   33790 node_conditions.go:122] node storage ephemeral capacity is 123591232Ki
	I1117 16:21:33.840552   33790 node_conditions.go:123] node cpu capacity is 6
	I1117 16:21:33.840563   33790 node_conditions.go:105] duration metric: took 4.193776ms to run NodePressure ...
	I1117 16:21:33.840571   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1117 16:21:34.196195   33790 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1117 16:21:34.200872   33790 kubeadm.go:746] kubelet initialised
	I1117 16:21:34.200878   33790 kubeadm.go:747] duration metric: took 4.672789ms waiting for restarted kubelet to initialise ...
	I1117 16:21:34.200885   33790 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1117 16:21:34.206372   33790 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-dnq6x" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.216319   33790 pod_ready.go:92] pod "coredns-78fcd69978-dnq6x" in "kube-system" namespace has status "Ready":"True"
	I1117 16:21:34.216325   33790 pod_ready.go:81] duration metric: took 9.941793ms waiting for pod "coredns-78fcd69978-dnq6x" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.216336   33790 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.733695   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "etcd-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.733705   33790 pod_ready.go:81] duration metric: took 517.348753ms waiting for pod "etcd-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:34.733709   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "etcd-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.733723   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.738314   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.738320   33790 pod_ready.go:81] duration metric: took 4.593288ms waiting for pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:34.738325   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-apiserver-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.738332   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:34.742721   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.742728   33790 pod_ready.go:81] duration metric: took 4.392273ms waiting for pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:34.742733   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-controller-manager-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:34.742742   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbv29" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:35.029336   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-proxy-wbv29" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.029342   33790 pod_ready.go:81] duration metric: took 286.584397ms waiting for pod "kube-proxy-wbv29" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:35.029346   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-proxy-wbv29" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.029360   33790 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	I1117 16:21:35.432286   33790 pod_ready.go:97] node "functional-20211117161858-31976" hosting pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.432297   33790 pod_ready.go:81] duration metric: took 402.92083ms waiting for pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace to be "Ready" ...
	E1117 16:21:35.432302   33790 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211117161858-31976" hosting pod "kube-scheduler-functional-20211117161858-31976" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:35.432314   33790 pod_ready.go:38] duration metric: took 1.231387704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1117 16:21:35.432326   33790 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1117 16:21:35.448332   33790 ops.go:34] apiserver oom_adj: -16
	I1117 16:21:35.448338   33790 kubeadm.go:604] restartCluster took 8.54859145s
	I1117 16:21:35.448342   33790 kubeadm.go:392] StartCluster complete in 8.585052624s
	I1117 16:21:35.448355   33790 settings.go:142] acquiring lock: {Name:mk2452f58907cab9912f4bf05149d18acb236e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:21:35.448438   33790 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 16:21:35.448901   33790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig: {Name:mk7b88dea9e1cf642f59443febe00ec01446b401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:21:35.454603   33790 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20211117161858-31976" rescaled to 1
	I1117 16:21:35.454631   33790 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 16:21:35.454642   33790 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1117 16:21:35.454677   33790 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1117 16:21:35.482280   33790 out.go:176] * Verifying Kubernetes components...
	I1117 16:21:35.454823   33790 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:21:35.482339   33790 addons.go:65] Setting default-storageclass=true in profile "functional-20211117161858-31976"
	I1117 16:21:35.482342   33790 addons.go:65] Setting storage-provisioner=true in profile "functional-20211117161858-31976"
	I1117 16:21:35.482355   33790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20211117161858-31976"
	I1117 16:21:35.482360   33790 addons.go:153] Setting addon storage-provisioner=true in "functional-20211117161858-31976"
	I1117 16:21:35.482365   33790 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	W1117 16:21:35.482365   33790 addons.go:165] addon storage-provisioner should already be in state true
	I1117 16:21:35.482397   33790 host.go:66] Checking if "functional-20211117161858-31976" exists ...
	I1117 16:21:35.482835   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:35.503648   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:35.517807   33790 start.go:719] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1117 16:21:35.517901   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:35.651758   33790 addons.go:153] Setting addon default-storageclass=true in "functional-20211117161858-31976"
	W1117 16:21:35.651773   33790 addons.go:165] addon default-storageclass should already be in state true
	I1117 16:21:35.651786   33790 host.go:66] Checking if "functional-20211117161858-31976" exists ...
	I1117 16:21:35.652195   33790 cli_runner.go:115] Run: docker container inspect functional-20211117161858-31976 --format={{.State.Status}}
	I1117 16:21:35.688652   33790 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 16:21:35.688763   33790 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 16:21:35.688779   33790 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1117 16:21:35.688858   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:35.694743   33790 node_ready.go:35] waiting up to 6m0s for node "functional-20211117161858-31976" to be "Ready" ...
	I1117 16:21:35.793431   33790 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1117 16:21:35.793440   33790 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1117 16:21:35.793530   33790 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117161858-31976
	I1117 16:21:35.827874   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:35.921703   33790 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 16:21:35.934689   33790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52137 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/functional-20211117161858-31976/id_rsa Username:docker}
	I1117 16:21:36.037457   33790 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1117 16:21:36.223180   33790 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1117 16:21:36.223193   33790 addons.go:417] enableAddons completed in 768.519889ms
	I1117 16:21:37.704232   33790 node_ready.go:58] node "functional-20211117161858-31976" has status "Ready":"False"
	I1117 16:21:48.248672   33790 node_ready.go:53] error getting node "functional-20211117161858-31976": an error on the server ("") has prevented the request from succeeding (get nodes functional-20211117161858-31976)
	I1117 16:21:48.248680   33790 node_ready.go:38] duration metric: took 12.55354997s waiting for node "functional-20211117161858-31976" to be "Ready" ...
	I1117 16:21:48.274481   33790 out.go:176] 
	W1117 16:21:48.274560   33790 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20211117161858-31976": an error on the server ("") has prevented the request from succeeding (get nodes functional-20211117161858-31976)
	W1117 16:21:48.274567   33790 out.go:241] * 
	W1117 16:21:48.275136   33790 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2021-11-18 00:19:19 UTC, end at Thu 2021-11-18 00:22:03 UTC. --
	Nov 18 00:19:46 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:19:46.215832592Z" level=info msg="Daemon has completed initialization"
	Nov 18 00:19:46 functional-20211117161858-31976 systemd[1]: Started Docker Application Container Engine.
	Nov 18 00:19:46 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:19:46.238854763Z" level=info msg="API listen on [::]:2376"
	Nov 18 00:19:46 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:19:46.241903110Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 18 00:20:31 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:20:31.951567126Z" level=info msg="ignoring event" container=297f7a37e8bd7e92a66de64cccc099099267fca4f04ebdb80560c3fcd6134577 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:20:31 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:20:31.995327419Z" level=info msg="ignoring event" container=1a443d1f744d39173e81d045ed1ce442d19760861b9b4307dc4c605d2085949d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:20:52 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:20:52.253052227Z" level=info msg="ignoring event" container=d0d3e0dca5e9300eb2fb46ca47b9bcb6ec63a0aad06b59cb5d507a3a4e10680c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.295680499Z" level=info msg="ignoring event" container=41b6ab69a384cb2e2ddb1f58cec2f1e85d3f1eb993943f1b6072c82a98fd06b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.304131363Z" level=info msg="ignoring event" container=e4efee81ca4695061c2b6dbf156d12f268077143c8eb277bd7100d8ef2b2c746 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.317333478Z" level=info msg="ignoring event" container=acade1ad68d757b68fa1e074e64d4bcad38eb847718506af25d18d956f2802ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.317377663Z" level=info msg="ignoring event" container=7d514020b6b739c237d0dfbeed599f8d20fe29b82c0c692c1fd395c603db5d4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.318184856Z" level=info msg="ignoring event" container=9bad689182deb3a3a60cf2e56ad1100208a956315916a74e9011474dce7d993e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.328384627Z" level=info msg="ignoring event" container=72ff4c2179fba02d43ccc433933719e2c2c383ce806bfdeb133a5c8c986a42f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.400303630Z" level=info msg="ignoring event" container=ea850ca3cc61f91a1d9499307e0c2772fec37eaae99f198afab1abd75c47845b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.401539547Z" level=info msg="ignoring event" container=c595f5f22c8069dac10251914290d0bd3bffa5e83e842b2b2e5d28f8490bb26f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.404561545Z" level=info msg="ignoring event" container=26ff204bef53fc0dd18974e371dadc54ca7dcdf9d710d3665b61c303bddcee0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.421254142Z" level=info msg="ignoring event" container=e30cdc66639da264af9585cfacae2cd0521fceedefac10b6fcfb5737ba3a87c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:27 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:27.430842555Z" level=info msg="ignoring event" container=57f34cc17d7a6aae9f8632a13b459232478f514fe0f9238e9f12307474be2aeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:28 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:28.107342437Z" level=info msg="ignoring event" container=5c6c4dc475e0660b8b2222acba60032c7e49bb624a8c38a53e34b1ebba19402e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:28 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:28.118865919Z" level=info msg="ignoring event" container=9b2f0c8b2294b600fc96d434db4d07350341a1416663656abaf0eabe13f7007c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:32 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:32.227573401Z" level=info msg="ignoring event" container=0e35560df59bfc3e72f16462d40b50c0480ce4e80d80028a8c867f8f9cf914ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:36 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:36.535503755Z" level=info msg="ignoring event" container=2ff1bb3b6c9371822d485e55a78074e668fc4461c35828f4d9ea383e3af000a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:37 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:37.813734943Z" level=info msg="ignoring event" container=e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:37 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:37.885685250Z" level=info msg="ignoring event" container=2e79e79749173295007690797003e6e7aadbc13edb87be79e18ab291e2222c62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 18 00:21:37 functional-20211117161858-31976 dockerd[468]: time="2021-11-18T00:21:37.910686858Z" level=info msg="ignoring event" container=622e4ca9307f2db3d4eff6dbe6895f584fdeac51698c6b276894bf22bcc4af05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	28d32fce294d7       53224b502ea4d       6 seconds ago        Running             kube-apiserver            2                   f8f1e147723bf
	e180853e41903       53224b502ea4d       26 seconds ago       Exited              kube-apiserver            1                   f8f1e147723bf
	fbf9821ffb084       8d147537fb7d1       26 seconds ago       Running             coredns                   1                   47dbcdbbcbb52
	422236a5e675b       6e38f40d628db       27 seconds ago       Running             storage-provisioner       2                   250d98e52e33e
	250907264d348       0aa9c7e31d307       35 seconds ago       Running             kube-scheduler            1                   6e098b42d6ec1
	e34b718143e14       05c905cef780c       35 seconds ago       Running             kube-controller-manager   1                   8d617fb091a41
	4a9c75d87f40b       0048118155842       35 seconds ago       Running             etcd                      1                   dbf91b2d10759
	dec4850aecf1d       6120bd723dced       35 seconds ago       Running             kube-proxy                1                   1357ef914aa3f
	ea850ca3cc61f       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   41b6ab69a384c
	0e35560df59bf       8d147537fb7d1       About a minute ago   Exited              coredns                   0                   7d514020b6b73
	e4efee81ca469       6120bd723dced       About a minute ago   Exited              kube-proxy                0                   26ff204bef53f
	9b2f0c8b2294b       0aa9c7e31d307       2 minutes ago        Exited              kube-scheduler            0                   9bad689182deb
	57f34cc17d7a6       05c905cef780c       2 minutes ago        Exited              kube-controller-manager   0                   72ff4c2179fba
	acade1ad68d75       0048118155842       2 minutes ago        Exited              etcd                      0                   c595f5f22c806
	
	* 
	* ==> coredns [0e35560df59b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [fbf9821ffb08] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	W1118 00:21:37.886846       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1118 00:21:37.886923       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1118 00:21:37.886940       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	E1118 00:21:38.740403       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:39.028112       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:39.237619       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:40.592244       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:40.783828       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:41.320049       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:45.185752       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:45.421249       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:46.568514       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:53.858305       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:54.022869       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:55.979621       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=525": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20211117161858-31976
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20211117161858-31976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b7b0a42f687dae576880a10f0aa2f899d9174438
	                    minikube.k8s.io/name=functional-20211117161858-31976
	                    minikube.k8s.io/updated_at=2021_11_17T16_20_07_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 18 Nov 2021 00:20:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20211117161858-31976
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 18 Nov 2021 00:21:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 18 Nov 2021 00:21:34 +0000   Thu, 18 Nov 2021 00:20:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 18 Nov 2021 00:21:34 +0000   Thu, 18 Nov 2021 00:20:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 18 Nov 2021 00:21:34 +0000   Thu, 18 Nov 2021 00:20:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 18 Nov 2021 00:21:34 +0000   Thu, 18 Nov 2021 00:21:34 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20211117161858-31976
	Capacity:
	  cpu:                6
	  ephemeral-storage:  123591232Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  123591232Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
	  System UUID:                e68caa8e-8163-46fe-ae02-a5e7cce38646
	  Boot ID:                    2574929b-a85a-4f3c-934f-b3f13d66b47f
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.3
	  Kube-Proxy Version:         v1.22.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-dnq6x                                   100m (1%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (2%!)(MISSING)     104s
	  kube-system                 etcd-functional-20211117161858-31976                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         112s
	  kube-system                 kube-apiserver-functional-20211117161858-31976             250m (4%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         26s
	  kube-system                 kube-controller-manager-functional-20211117161858-31976    200m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         112s
	  kube-system                 kube-proxy-wbv29                                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         104s
	  kube-system                 kube-scheduler-functional-20211117161858-31976             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         112s
	  kube-system                 storage-provisioner                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (12%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (2%!)(MISSING)  170Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                   From        Message
	  ----    ------                   ----                  ----        -------
	  Normal  Starting                 29s                   kube-proxy  
	  Normal  Starting                 99s                   kube-proxy  
	  Normal  NodeHasNoDiskPressure    2m9s (x5 over 2m10s)  kubelet     Node functional-20211117161858-31976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x5 over 2m10s)  kubelet     Node functional-20211117161858-31976 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m7s (x6 over 2m10s)  kubelet     Node functional-20211117161858-31976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientMemory  116s                  kubelet     Node functional-20211117161858-31976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                  kubelet     Node functional-20211117161858-31976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                  kubelet     Node functional-20211117161858-31976 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 116s                  kubelet     Starting kubelet.
	  Normal  NodeReady                106s                  kubelet     Node functional-20211117161858-31976 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29s                   kubelet     Node functional-20211117161858-31976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                   kubelet     Node functional-20211117161858-31976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                   kubelet     Node functional-20211117161858-31976 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                   kubelet     Starting kubelet.
	  Normal  NodeNotReady             29s                   kubelet     Node functional-20211117161858-31976 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  29s                   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.028934] bpfilter: read fail 0
	[  +0.025120] bpfilter: read fail 0
	[  +0.028036] bpfilter: read fail 0
	[  +0.030521] bpfilter: read fail 0
	[  +0.025914] bpfilter: read fail 0
	[  +0.037066] bpfilter: read fail 0
	[  +0.044538] bpfilter: read fail 0
	[  +0.023912] bpfilter: read fail 0
	[  +0.034062] bpfilter: read fail 0
	[  +0.033394] bpfilter: read fail 0
	[  +0.034824] bpfilter: write fail -32
	[  +0.028619] bpfilter: write fail -32
	[  +0.026012] bpfilter: read fail 0
	[  +0.034679] bpfilter: write fail -32
	[  +0.033542] bpfilter: read fail 0
	[  +0.030549] bpfilter: write fail -32
	[  +0.030916] bpfilter: read fail 0
	[  +0.024932] bpfilter: read fail 0
	[  +0.030508] bpfilter: read fail 0
	[  +0.042210] bpfilter: read fail 0
	[  +0.025433] bpfilter: read fail 0
	[  +0.037375] bpfilter: write fail -32
	[  +0.033166] bpfilter: write fail -32
	[  +0.041518] bpfilter: read fail 0
	[  +0.031775] bpfilter: write fail -32
	
	* 
	* ==> etcd [4a9c75d87f40] <==
	* {"level":"info","ts":"2021-11-18T00:21:28.819Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-11-18T00:21:28.819Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-11-18T00:21:28.820Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-11-18T00:21:28.821Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-11-18T00:21:28.822Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-11-18T00:21:29.213Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211117161858-31976 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-11-18T00:21:29.213Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:21:29.214Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:21:29.215Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-11-18T00:21:29.215Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-11-18T00:21:29.215Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-11-18T00:21:29.226Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [acade1ad68d7] <==
	* {"level":"info","ts":"2021-11-18T00:20:01.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-11-18T00:20:01.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-11-18T00:20:01.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211117161858-31976 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-11-18T00:20:01.467Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:20:01.468Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-11-18T00:20:01.468Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-11-18T00:20:01.468Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-11-18T00:20:01.469Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-11-18T00:20:01.475Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:20:01.475Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:20:01.475Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:21:27.125Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-11-18T00:21:27.125Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20211117161858-31976","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/11/18 00:21:27 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/11/18 00:21:27 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-11-18T00:21:27.135Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2021-11-18T00:21:27.137Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:27.138Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-11-18T00:21:27.138Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20211117161858-31976","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  00:22:03 up 11 min,  0 users,  load average: 2.19, 2.16, 1.33
	Linux functional-20211117161858-31976 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [28d32fce294d] <==
	* I1118 00:22:00.602142       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I1118 00:22:00.602154       1 controller.go:85] Starting OpenAPI controller
	I1118 00:22:00.602163       1 naming_controller.go:291] Starting NamingConditionController
	I1118 00:22:00.602183       1 establishing_controller.go:76] Starting EstablishingController
	I1118 00:22:00.602191       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1118 00:22:00.602203       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1118 00:22:00.602234       1 crd_finalizer.go:266] Starting CRDFinalizer
	E1118 00:22:00.603790       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1118 00:22:00.604582       1 available_controller.go:491] Starting AvailableConditionController
	I1118 00:22:00.604608       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1118 00:22:00.612908       1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1118 00:22:00.612990       1 controller.go:83] Starting OpenAPI AggregationController
	I1118 00:22:00.613300       1 apf_controller.go:312] Starting API Priority and Fairness config controller
	I1118 00:22:00.700167       1 cache.go:39] Caches are synced for autoregister controller
	I1118 00:22:00.700536       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1118 00:22:00.701355       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1118 00:22:00.702165       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1118 00:22:00.705052       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1118 00:22:00.713446       1 apf_controller.go:317] Running API Priority and Fairness config worker
	I1118 00:22:00.735901       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1118 00:22:01.600071       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1118 00:22:01.600140       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1118 00:22:01.604087       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1118 00:22:03.066739       1 controller.go:611] quota admission added evaluator for: endpoints
	I1118 00:22:03.882455       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	
	* 
	* ==> kube-apiserver [e180853e4190] <==
	* I1118 00:21:37.794054       1 server.go:553] external host was not specified, using 192.168.49.2
	I1118 00:21:37.794682       1 server.go:161] Version: v1.22.3
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	
	* 
	* ==> kube-controller-manager [57f34cc17d7a] <==
	* I1118 00:20:19.861174       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I1118 00:20:19.864808       1 shared_informer.go:247] Caches are synced for node 
	I1118 00:20:19.864840       1 range_allocator.go:172] Starting range CIDR allocator
	I1118 00:20:19.864843       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I1118 00:20:19.864848       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I1118 00:20:19.868661       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wbv29"
	I1118 00:20:19.873974       1 range_allocator.go:373] Set node functional-20211117161858-31976 PodCIDR to [10.244.0.0/24]
	I1118 00:20:19.885673       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-dnq6x"
	I1118 00:20:19.936687       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1118 00:20:19.938663       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-hk5kk"
	I1118 00:20:20.000246       1 shared_informer.go:247] Caches are synced for disruption 
	I1118 00:20:20.038779       1 disruption.go:371] Sending events to api server.
	I1118 00:20:20.051192       1 shared_informer.go:247] Caches are synced for expand 
	I1118 00:20:20.051310       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1118 00:20:20.051344       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1118 00:20:20.056678       1 shared_informer.go:247] Caches are synced for attach detach 
	I1118 00:20:20.071851       1 shared_informer.go:247] Caches are synced for resource quota 
	I1118 00:20:20.071858       1 shared_informer.go:247] Caches are synced for ephemeral 
	I1118 00:20:20.076620       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I1118 00:20:20.078475       1 shared_informer.go:247] Caches are synced for resource quota 
	I1118 00:20:20.080869       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-hk5kk"
	I1118 00:20:20.102042       1 shared_informer.go:247] Caches are synced for stateful set 
	I1118 00:20:20.488636       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1118 00:20:20.493195       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1118 00:20:20.493249       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [e34b718143e1] <==
	* I1118 00:22:04.244405       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I1118 00:22:04.244410       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1118 00:22:04.244422       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1118 00:22:04.244728       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	W1118 00:22:04.255994       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="functional-20211117161858-31976" does not exist
	I1118 00:22:04.276276       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1118 00:22:04.282445       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I1118 00:22:04.284819       1 shared_informer.go:247] Caches are synced for PV protection 
	I1118 00:22:04.290591       1 shared_informer.go:247] Caches are synced for expand 
	I1118 00:22:04.298317       1 shared_informer.go:247] Caches are synced for TTL 
	I1118 00:22:04.300567       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I1118 00:22:04.313914       1 shared_informer.go:247] Caches are synced for cronjob 
	I1118 00:22:04.318888       1 shared_informer.go:247] Caches are synced for node 
	I1118 00:22:04.318933       1 range_allocator.go:172] Starting range CIDR allocator
	I1118 00:22:04.318939       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I1118 00:22:04.318945       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I1118 00:22:04.322913       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I1118 00:22:04.323050       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I1118 00:22:04.332691       1 shared_informer.go:247] Caches are synced for namespace 
	I1118 00:22:04.333939       1 shared_informer.go:247] Caches are synced for crt configmap 
	I1118 00:22:04.338528       1 shared_informer.go:247] Caches are synced for service account 
	I1118 00:22:04.344468       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I1118 00:22:04.344502       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I1118 00:22:04.344518       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I1118 00:22:04.344526       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	
	* 
	* ==> kube-proxy [dec4850aecf1] <==
	* E1118 00:21:28.731178       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976": dial tcp 192.168.49.2:8441: connect: connection refused
	I1118 00:21:32.008634       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1118 00:21:32.008670       1 server_others.go:140] Detected node IP 192.168.49.2
	W1118 00:21:32.008696       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1118 00:21:34.319819       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1118 00:21:34.319893       1 server_others.go:212] Using iptables Proxier.
	I1118 00:21:34.319919       1 server_others.go:219] creating dualStackProxier for iptables.
	W1118 00:21:34.319928       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1118 00:21:34.321812       1 server.go:649] Version: v1.22.3
	I1118 00:21:34.323441       1 config.go:315] Starting service config controller
	I1118 00:21:34.323468       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1118 00:21:34.323734       1 config.go:224] Starting endpoint slice config controller
	I1118 00:21:34.323740       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1118 00:21:34.424028       1 shared_informer.go:247] Caches are synced for service config 
	I1118 00:21:34.424062       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1118 00:21:41.600538       1 trace.go:205] Trace[1902479152]: "iptables restore" (18-Nov-2021 00:21:39.544) (total time: 2056ms):
	Trace[1902479152]: [2.056179659s] [2.056179659s] END
	I1118 00:21:50.818216       1 trace.go:205] Trace[1871746744]: "iptables restore" (18-Nov-2021 00:21:48.331) (total time: 2486ms):
	Trace[1871746744]: [2.486481381s] [2.486481381s] END
	
	* 
	* ==> kube-proxy [e4efee81ca46] <==
	* I1118 00:20:21.651696       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1118 00:20:21.651759       1 server_others.go:140] Detected node IP 192.168.49.2
	W1118 00:20:21.651773       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1118 00:20:23.854981       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1118 00:20:23.855018       1 server_others.go:212] Using iptables Proxier.
	I1118 00:20:23.855027       1 server_others.go:219] creating dualStackProxier for iptables.
	W1118 00:20:23.855038       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1118 00:20:23.855321       1 server.go:649] Version: v1.22.3
	I1118 00:20:23.855763       1 config.go:224] Starting endpoint slice config controller
	I1118 00:20:23.855791       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1118 00:20:23.855808       1 config.go:315] Starting service config controller
	I1118 00:20:23.855814       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1118 00:20:23.956738       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1118 00:20:23.956854       1 shared_informer.go:247] Caches are synced for service config 
	I1118 00:20:47.183178       1 trace.go:205] Trace[886166706]: "iptables restore" (18-Nov-2021 00:20:45.117) (total time: 2065ms):
	Trace[886166706]: [2.065366873s] [2.065366873s] END
	I1118 00:21:09.711497       1 trace.go:205] Trace[1669630311]: "iptables restore" (18-Nov-2021 00:21:07.502) (total time: 2208ms):
	Trace[1669630311]: [2.208983227s] [2.208983227s] END
	
	* 
	* ==> kube-scheduler [250907264d34] <==
	* I1118 00:21:29.418722       1 serving.go:347] Generated self-signed cert in-memory
	W1118 00:21:31.997832       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1118 00:21:31.997895       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1118 00:21:31.997924       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1118 00:21:31.997933       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1118 00:21:32.011623       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1118 00:21:32.011790       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1118 00:21:32.012127       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1118 00:21:32.011814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1118 00:21:32.021075       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021368       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021634       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021713       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021793       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021855       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:21:32.021896       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I1118 00:21:32.112609       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1118 00:22:00.629682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	
	* 
	* ==> kube-scheduler [9b2f0c8b2294] <==
	* E1118 00:20:03.778476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:03.778649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1118 00:20:03.778756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1118 00:20:03.778356       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1118 00:20:03.778477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:03.778664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1118 00:20:03.778867       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1118 00:20:03.779132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1118 00:20:03.779200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:03.779292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1118 00:20:03.779420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1118 00:20:03.779535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:04.601872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1118 00:20:04.679677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1118 00:20:04.718435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1118 00:20:04.775960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1118 00:20:04.790187       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1118 00:20:04.862294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:04.865410       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1118 00:20:04.897363       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1118 00:20:04.930491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1118 00:20:06.875424       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I1118 00:21:27.208721       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1118 00:21:27.209099       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I1118 00:21:27.209128       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-11-18 00:19:19 UTC, end at Thu 2021-11-18 00:22:04 UTC. --
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.821657    6078 status_manager.go:601] "Failed to get status for pod" podUID=3c40bf658f457f3d925e48d646a29704 pod="kube-system/kube-apiserver-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.821881    6078 status_manager.go:601] "Failed to get status for pod" podUID=06052898487a9eef2760f89d323d2979 pod="kube-system/etcd-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:45.973843    6078 scope.go:110] "RemoveContainer" containerID="e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335"
	Nov 18 00:21:45 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:45.974382    6078 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20211117161858-31976_kube-system(3c40bf658f457f3d925e48d646a29704)\"" pod="kube-system/kube-apiserver-functional-20211117161858-31976" podUID=3c40bf658f457f3d925e48d646a29704
	Nov 18 00:21:46 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:46.980435    6078 scope.go:110] "RemoveContainer" containerID="e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335"
	Nov 18 00:21:46 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:46.981056    6078 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20211117161858-31976_kube-system(3c40bf658f457f3d925e48d646a29704)\"" pod="kube-system/kube-apiserver-functional-20211117161858-31976" podUID=3c40bf658f457f3d925e48d646a29704
	Nov 18 00:21:47 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:47.264527    6078 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:47 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:47.989152    6078 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-20211117161858-31976.16b87c15dfe682a4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-20211117161858-31976", UID:"3c40bf658f457f3d925e48d646a29704", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"functional-20211117161858-31976"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc05d8504782218a4, ext:4253908481, loc:(*time.Location)(0x77a8680)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc05d8504782218a4, ext:4253908481, loc:(*time.Location)(0x77a8680)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Nov 18 00:21:50 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:50.465719    6078 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:54 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:54.327426    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?resourceVersion=0&timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:54 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:54.327891    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:54 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:54.328323    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:54 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:54.328687    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:54 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:54.328969    6078 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211117161858-31976\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211117161858-31976?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:54 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:54.329010    6078 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.820351    6078 status_manager.go:601] "Failed to get status for pod" podUID=0ad7422ab14ae2d4b971f1822a1ff8ef pod="kube-system/kube-scheduler-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.820720    6078 status_manager.go:601] "Failed to get status for pod" podUID=3c40bf658f457f3d925e48d646a29704 pod="kube-system/kube-apiserver-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.820901    6078 status_manager.go:601] "Failed to get status for pod" podUID=06052898487a9eef2760f89d323d2979 pod="kube-system/etcd-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.821052    6078 status_manager.go:601] "Failed to get status for pod" podUID=9b9ccef0-fccc-43f7-8dec-952d07564964 pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.821220    6078 status_manager.go:601] "Failed to get status for pod" podUID=453e3252-c5a8-48b4-893b-0496a8ed4dec pod="kube-system/kube-proxy-wbv29" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wbv29\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.821434    6078 status_manager.go:601] "Failed to get status for pod" podUID=07d38b3c32289fcb168a5eedbb42a060 pod="kube-system/kube-controller-manager-functional-20211117161858-31976" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20211117161858-31976\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:55 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:55.821648    6078 status_manager.go:601] "Failed to get status for pod" podUID=67a186a3-f954-4960-bb9e-57d18527dbc7 pod="kube-system/coredns-78fcd69978-dnq6x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-dnq6x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Nov 18 00:21:56 functional-20211117161858-31976 kubelet[6078]: E1118 00:21:56.870506    6078 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211117161858-31976?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Nov 18 00:21:57 functional-20211117161858-31976 kubelet[6078]: I1118 00:21:57.819518    6078 scope.go:110] "RemoveContainer" containerID="e180853e419038951cae72f6dcf958ed9e63196aeb434f87f93d71eb7f7bd335"
	Nov 18 00:22:00 functional-20211117161858-31976 kubelet[6078]: E1118 00:22:00.623024    6078 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> storage-provisioner [422236a5e675] <==
	* I1118 00:21:36.639683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1118 00:21:36.650400       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1118 00:21:36.650449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1118 00:21:40.111980       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:44.350512       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:47.946818       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:50.997974       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:54.018609       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:21:57.669218       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1118 00:22:00.682650       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: endpoints "k8s.io-minikube-hostpath" is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
	I1118 00:22:03.068026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1118 00:22:03.068173       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20211117161858-31976_4fef0d5b-bc84-456c-a1c3-fda05fbd6e27!
	I1118 00:22:03.068434       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32b2b3be-e39d-44de-bb2d-5d1067722fdc", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20211117161858-31976_4fef0d5b-bc84-456c-a1c3-fda05fbd6e27 became leader
	I1118 00:22:03.169335       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20211117161858-31976_4fef0d5b-bc84-456c-a1c3-fda05fbd6e27!
	
	* 
	* ==> storage-provisioner [ea850ca3cc61] <==
	* I1118 00:20:52.931610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1118 00:20:52.938911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1118 00:20:52.939029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1118 00:20:52.953253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1118 00:20:52.953292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32b2b3be-e39d-44de-bb2d-5d1067722fdc", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20211117161858-31976_1183ee1f-9cef-48cf-8de5-e4dbd61f5221 became leader
	I1118 00:20:52.953495       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20211117161858-31976_1183ee1f-9cef-48cf-8de5-e4dbd61f5221!
	I1118 00:20:53.054549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20211117161858-31976_1183ee1f-9cef-48cf-8de5-e4dbd61f5221!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20211117161858-31976 -n functional-20211117161858-31976
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20211117161858-31976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20211117161858-31976 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20211117161858-31976 describe pod : exit status 1 (44.464431ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context functional-20211117161858-31976 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (13.26s)

TestRunningBinaryUpgrade (194.29s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1901742650.exe start -p running-upgrade-20211117170725-31976 --memory=2200 --vm-driver=docker 
E1117 17:07:29.139426   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:49.625558   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:08:30.588097   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1901742650.exe start -p running-upgrade-20211117170725-31976 --memory=2200 --vm-driver=docker : exit status 70 (1m44.339552271s)

-- stdout --
	* [running-upgrade-20211117170725-31976] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2148005027
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "running-upgrade-20211117170725-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20211117170725-31976", then "minikube start -p running-upgrade-20211117170725-31976 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 9.88 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 30.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 48.25 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 69.44 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 90.31 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 111.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 133.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 155.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 171.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 188.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 208.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 219.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 240.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 261.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 284.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 305.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 327.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 349.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 372.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 394.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 415.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 438.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 460.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 481.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 496.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 504.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 512.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 513.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 536.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiBE1117 17:07:34.571510   42508 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1901742650.exe start -p running-upgrade-20211117170725-31976 --memory=2200 --vm-driver=docker 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1901742650.exe start -p running-upgrade-20211117170725-31976 --memory=2200 --vm-driver=docker : exit status 70 (42.078030285s)

-- stdout --
	* [running-upgrade-20211117170725-31976] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig798656051
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-20211117170725-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "running-upgrade-20211117170725-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20211117170725-31976", then "minikube start -p running-upgrade-20211117170725-31976 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 17:09:14.633284   42896 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1901742650.exe start -p running-upgrade-20211117170725-31976 --memory=2200 --vm-driver=docker 
E1117 17:09:58.086319   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 17:10:14.958281   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1901742650.exe start -p running-upgrade-20211117170725-31976 --memory=2200 --vm-driver=docker : exit status 70 (42.014791586s)

-- stdout --
	* [running-upgrade-20211117170725-31976] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1527647170
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-20211117170725-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "running-upgrade-20211117170725-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20211117170725-31976", then "minikube start -p running-upgrade-20211117170725-31976 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 17:09:58.520140   43275 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20211117170725-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70

=== CONT  TestRunningBinaryUpgrade
panic.go:642: *** TestRunningBinaryUpgrade FAILED at 2021-11-17 17:10:38.074568 -0800 PST m=+3609.869995950
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20211117170725-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect running-upgrade-20211117170725-31976: exit status 1 (219.513825ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20211117170725-31976 -n running-upgrade-20211117170725-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20211117170725-31976 -n running-upgrade-20211117170725-31976: exit status 7 (230.057389ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 17:10:38.523220   43878 status.go:247] status error: host: state: unknown state "running-upgrade-20211117170725-31976": docker container inspect running-upgrade-20211117170725-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-20211117170725-31976" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-20211117170725-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20211117170725-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20211117170725-31976: (1.258615786s)
--- FAIL: TestRunningBinaryUpgrade (194.29s)
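Every failure in this run traces back to the same daemon error, `Error response from daemon: Bad response from Docker engine`, i.e. an unresponsive Docker Desktop on the Jenkins host rather than a minikube code regression. A minimal triage sketch (hypothetical helper, not part of the test suite; it assumes the standard `docker`/`minikube` CLIs on PATH and uses the profile name and recovery commands quoted in the log above):

```shell
# Probe the Docker engine the same way minikube's postmortem does
# ("docker system info"); classify it as "up" or "down".
PROFILE="running-upgrade-20211117170725-31976"
STATUS=$(docker system info >/dev/null 2>&1 && echo up || echo down)

if [ "$STATUS" = "down" ]; then
  # Engine unresponsive: restart Docker Desktop first, then recreate the
  # profile using the commands minikube itself suggests in the output above.
  echo "engine down; after restarting Docker Desktop, run:"
  echo "  minikube delete -p ${PROFILE}"
  echo "  minikube start -p ${PROFILE} --alsologtostderr -v=1"
fi
```

If the probe reports `down` even after a Docker Desktop restart, the host (not the test code) is the likely culprit and the run should be retried on a healthy node.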

TestKubernetesUpgrade (110.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117170535-31976 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker 
E1117 17:06:14.139708   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117170535-31976 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker : (1m8.432133651s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117170535-31976
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117170535-31976: (6.621235546s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20211117170535-31976 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20211117170535-31976 status --format={{.Host}}: exit status 7 (212.325864ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117170535-31976 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=docker 
E1117 17:07:08.605490   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:08.610992   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:08.621941   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:08.645028   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:08.690986   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:08.773436   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:08.940965   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:09.269306   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:09.919272   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:11.207777   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:13.777379   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:18.898601   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:07:20.406598   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117170535-31976 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=docker : exit status 80 (33.613468295s)

-- stdout --
	* [kubernetes-upgrade-20211117170535-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20211117170535-31976 in cluster kubernetes-upgrade-20211117170535-31976
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20211117170535-31976" ...
	* docker "kubernetes-upgrade-20211117170535-31976" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 17:06:50.775270   42213 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:06:50.775448   42213 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:06:50.775454   42213 out.go:310] Setting ErrFile to fd 2...
	I1117 17:06:50.775458   42213 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:06:50.775549   42213 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:06:50.775843   42213 out.go:304] Setting JSON to false
	I1117 17:06:50.812010   42213 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11185,"bootTime":1637186425,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:06:50.812131   42213 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:06:50.843714   42213 out.go:176] * [kubernetes-upgrade-20211117170535-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:06:50.843810   42213 notify.go:174] Checking for updates...
	I1117 17:06:50.844089   42213 preload.go:305] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I1117 17:06:50.898832   42213 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:06:50.898973   42213 preload.go:305] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I1117 17:06:50.924089   42213 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:06:50.950286   42213 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:06:50.976317   42213 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:06:50.976754   42213 config.go:176] Loaded profile config "kubernetes-upgrade-20211117170535-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 17:06:50.977101   42213 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 17:06:51.101331   42213 docker.go:132] docker version: linux-20.10.6
	I1117 17:06:51.101450   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:51.327924   42213 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:63 SystemTime:2021-11-18 01:06:51.239695789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:51.375238   42213 out.go:176] * Using the docker driver based on existing profile
	I1117 17:06:51.375302   42213 start.go:280] selected driver: docker
	I1117 17:06:51.375312   42213 start.go:775] validating driver "docker" against &{Name:kubernetes-upgrade-20211117170535-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117170535-31976 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 17:06:51.375441   42213 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 17:06:51.377948   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:51.612929   42213 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:63 SystemTime:2021-11-18 01:06:51.521850889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:51.613064   42213 cni.go:93] Creating CNI manager for ""
	I1117 17:06:51.613077   42213 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 17:06:51.613086   42213 start_flags.go:282] config:
	{Name:kubernetes-upgrade-20211117170535-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:kubernetes-upgrade-20211117170535-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 17:06:51.660640   42213 out.go:176] * Starting control plane node kubernetes-upgrade-20211117170535-31976 in cluster kubernetes-upgrade-20211117170535-31976
	I1117 17:06:51.660681   42213 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 17:06:51.707437   42213 out.go:176] * Pulling base image ...
	I1117 17:06:51.707483   42213 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 17:06:51.707507   42213 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 17:06:51.707535   42213 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 17:06:51.707557   42213 cache.go:57] Caching tarball of preloaded images
	I1117 17:06:51.707722   42213 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 17:06:51.707736   42213 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4-rc.0 on docker
	I1117 17:06:51.708438   42213 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/kubernetes-upgrade-20211117170535-31976/config.json ...
	I1117 17:06:51.865954   42213 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 17:06:51.865969   42213 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 17:06:51.865982   42213 cache.go:206] Successfully downloaded all kic artifacts
	I1117 17:06:51.866041   42213 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117170535-31976: {Name:mk9072703b78acf3b7213aadf906ddf9902cea7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 17:06:51.866133   42213 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117170535-31976" in 69.949µs
	I1117 17:06:51.866163   42213 start.go:93] Skipping create...Using existing machine configuration
	I1117 17:06:51.866172   42213 fix.go:55] fixHost starting: 
	I1117 17:06:51.866445   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	I1117 17:06:52.002809   42213 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117170535-31976: state=Stopped err=<nil>
	W1117 17:06:52.002849   42213 fix.go:134] unexpected machine state, will restart: <nil>
	I1117 17:06:52.029701   42213 out.go:176] * Restarting existing docker container for "kubernetes-upgrade-20211117170535-31976" ...
	I1117 17:06:52.029823   42213 cli_runner.go:115] Run: docker start kubernetes-upgrade-20211117170535-31976
	W1117 17:06:52.757923   42213 cli_runner.go:162] docker start kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:52.758068   42213 cli_runner.go:115] Run: docker inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:06:52.888984   42213 cli_runner.go:162] docker inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:06:52.889028   42213 errors.go:82] Failed to get postmortem inspect. docker inspect kubernetes-upgrade-20211117170535-31976 :docker inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:52.889163   42213 cli_runner.go:115] Run: docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.008402   42213 cli_runner.go:162] docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:06:53.008428   42213 errors.go:89] Failed to get postmortem logs. docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976 :docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:53.008510   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:53.182243   42213 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: Bad response from Docker engine] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-pl
ugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:53.182342   42213 errors.go:98] postmortem docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: Bad response from Docker engine] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:53.182475   42213 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117170535-31976] to gather additional debugging logs...
	I1117 17:06:53.182492   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.318601   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:53.318626   42213 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117170535-31976]: docker network inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:53.318637   42213 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117170535-31976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I1117 17:06:53.318731   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:53.488546   42213 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: Bad response from Docker engine] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:53.489139   42213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.602437   42213 cli_runner.go:162] docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:53.602544   42213 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 17:06:53.602617   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.725618   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:53.725714   42213 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:54.003912   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:54.124594   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:54.124670   42213 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:54.674331   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:54.797671   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:54.797760   42213 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:55.454169   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:55.574394   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:06:55.574474   42213 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:06:55.574495   42213 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:55.574506   42213 fix.go:57] fixHost completed within 3.708253072s
	I1117 17:06:55.574515   42213 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117170535-31976", held for 3.708290894s
	W1117 17:06:55.574528   42213 start.go:532] error starting host: inspecting NetworkSettings.Networks: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 17:06:55.574629   42213 out.go:241] ! StartHost failed, but will try again: inspecting NetworkSettings.Networks: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! StartHost failed, but will try again: inspecting NetworkSettings.Networks: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 17:06:55.574637   42213 start.go:547] Will try again in 5 seconds ...
	I1117 17:07:00.574920   42213 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117170535-31976: {Name:mk9072703b78acf3b7213aadf906ddf9902cea7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 17:07:00.575053   42213 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117170535-31976" in 98.876µs
	I1117 17:07:00.575074   42213 start.go:93] Skipping create...Using existing machine configuration
	I1117 17:07:00.575078   42213 fix.go:55] fixHost starting: 
	I1117 17:07:00.575377   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:00.698108   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:00.698157   42213 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117170535-31976: state= err=unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.698170   42213 fix.go:113] machineExists: false. err=machine does not exist
	I1117 17:07:00.725135   42213 out.go:176] * docker "kubernetes-upgrade-20211117170535-31976" container is missing, will recreate.
	I1117 17:07:00.725189   42213 delete.go:124] DEMOLISHING kubernetes-upgrade-20211117170535-31976 ...
	I1117 17:07:00.725392   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:00.861868   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	W1117 17:07:00.861911   42213 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.861936   42213 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.862364   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:00.978129   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:00.978172   42213 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117170535-31976, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.978260   42213 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976
	W1117 17:07:01.092880   42213 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:01.092911   42213 kic.go:360] could not find the container kubernetes-upgrade-20211117170535-31976 to remove it. will try anyways
	I1117 17:07:01.092996   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:01.209201   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	W1117 17:07:01.209238   42213 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:01.209322   42213 cli_runner.go:115] Run: docker exec --privileged -t kubernetes-upgrade-20211117170535-31976 /bin/bash -c "sudo init 0"
	W1117 17:07:01.322461   42213 cli_runner.go:162] docker exec --privileged -t kubernetes-upgrade-20211117170535-31976 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 17:07:01.322486   42213 oci.go:651] error shutdown kubernetes-upgrade-20211117170535-31976: docker exec --privileged -t kubernetes-upgrade-20211117170535-31976 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:02.326114   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:02.442126   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:02.442165   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:02.442173   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:02.442195   42213 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:02.912951   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:03.031243   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:03.031288   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:03.031301   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:03.031332   42213 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:03.924515   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:04.042733   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:04.042773   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:04.042781   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:04.042805   42213 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:04.679507   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:04.801314   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:04.801351   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:04.801358   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:04.801382   42213 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:05.916094   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:06.030465   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:06.030502   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:06.030512   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:06.030543   42213 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:07.552029   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:07.669793   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:07.669838   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:07.669855   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:07.669885   42213 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:10.714642   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:10.845215   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:10.845252   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:10.845259   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:10.845282   42213 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:16.627792   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:16.746605   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:16.746646   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:16.746653   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:16.746688   42213 oci.go:87] couldn't shut down kubernetes-upgrade-20211117170535-31976 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	 
	I1117 17:07:16.746769   42213 cli_runner.go:115] Run: docker rm -f -v kubernetes-upgrade-20211117170535-31976
	W1117 17:07:16.862743   42213 cli_runner.go:162] docker rm -f -v kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:16.862866   42213 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976
	W1117 17:07:16.978253   42213 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:16.978363   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 17:07:17.089943   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 17:07:17.090053   42213 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117170535-31976] to gather additional debugging logs...
	I1117 17:07:17.090068   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:07:17.204149   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:17.204173   42213 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117170535-31976]: docker network inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:17.204187   42213 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117170535-31976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 17:07:17.204202   42213 network_create.go:284] Error inspecting docker network kubernetes-upgrade-20211117170535-31976: docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 17:07:17.204558   42213 delete.go:139] delete failed (probably ok) <nil>
	I1117 17:07:17.204565   42213 fix.go:120] Sleeping 1 second for extra luck!
	I1117 17:07:18.207305   42213 start.go:126] createHost starting for "" (driver="docker")
	I1117 17:07:18.282955   42213 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 17:07:18.283137   42213 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117170535-31976" (driver="docker")
	I1117 17:07:18.283183   42213 client.go:168] LocalClient.Create starting
	I1117 17:07:18.283353   42213 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem
	I1117 17:07:18.283604   42213 main.go:130] libmachine: Decoding PEM data...
	I1117 17:07:18.283647   42213 main.go:130] libmachine: Parsing certificate...
	I1117 17:07:18.283801   42213 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem
	I1117 17:07:18.304803   42213 main.go:130] libmachine: Decoding PEM data...
	I1117 17:07:18.304837   42213 main.go:130] libmachine: Parsing certificate...
	I1117 17:07:18.305751   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 17:07:18.423985   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 17:07:18.424097   42213 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117170535-31976] to gather additional debugging logs...
	I1117 17:07:18.424116   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:07:18.536585   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:18.536611   42213 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117170535-31976]: docker network inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.536632   42213 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117170535-31976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I1117 17:07:18.536716   42213 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 17:07:18.649614   42213 cli_runner.go:162] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 17:07:18.649750   42213 network_create.go:254] running [docker network inspect bridge] to gather additional debugging logs...
	I1117 17:07:18.649784   42213 cli_runner.go:115] Run: docker network inspect bridge
	W1117 17:07:18.764966   42213 cli_runner.go:162] docker network inspect bridge returned with exit code 1
	I1117 17:07:18.764992   42213 network_create.go:257] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.765003   42213 network_create.go:259] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 17:07:18.765010   42213 network_create.go:75] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.765230   42213 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005a6110] misses:0}
	I1117 17:07:18.765255   42213 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 17:07:18.765271   42213 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
	I1117 17:07:18.765349   42213 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976
	W1117 17:07:18.877964   42213 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	E1117 17:07:18.878015   42213 network_create.go:95] error while trying to create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24: create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 17:07:18.878162   42213 out.go:241] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 17:07:18.878277   42213 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	W1117 17:07:18.994646   42213 cli_runner.go:162] docker ps -a --format {{.Names}} returned with exit code 1
	W1117 17:07:18.994676   42213 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.994769   42213 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true
	W1117 17:07:19.109297   42213 cli_runner.go:162] docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I1117 17:07:19.109339   42213 client.go:171] LocalClient.Create took 826.13022ms
	I1117 17:07:21.115680   42213 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 17:07:21.115838   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:21.230804   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:21.230882   42213 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:21.410252   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:21.526439   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:21.526527   42213 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:21.859203   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:21.979700   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:21.979785   42213 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:22.440613   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:22.555457   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:07:22.555546   42213 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:07:22.555575   42213 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:22.555599   42213 start.go:129] duration metric: createHost completed in 4.34814707s
	I1117 17:07:22.555658   42213 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 17:07:22.555709   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:22.667949   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:22.668030   42213 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:22.865551   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:22.979098   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:22.979187   42213 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:23.282713   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:23.402324   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:23.402401   42213 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:24.065859   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:24.180619   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:07:24.180700   42213 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:07:24.180723   42213 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:24.180734   42213 fix.go:57] fixHost completed within 23.60512594s
	I1117 17:07:24.180744   42213 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117170535-31976", held for 23.605153823s
	W1117 17:07:24.180884   42213 out.go:241] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117170535-31976" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117170535-31976 container: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117170535-31976" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117170535-31976 container: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 17:07:24.207631   42213 out.go:176] 
	W1117 17:07:24.207883   42213 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117170535-31976 container: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117170535-31976 container: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:07:24.207899   42213 out.go:241] * 
	* 
	W1117 17:07:24.208954   42213 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 17:07:24.307525   42213 out.go:176] 

** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117170535-31976 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=docker  : exit status 80
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20211117170535-31976 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20211117170535-31976 version --output=json: exit status 1 (40.274404ms)

** stderr ** 
	error: context "kubernetes-upgrade-20211117170535-31976" does not exist

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:642: *** TestKubernetesUpgrade FAILED at 2021-11-17 17:07:24.379121 -0800 PST m=+3416.177109272
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20211117170535-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20211117170535-31976: exit status 1 (117.735288ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20211117170535-31976 -n kubernetes-upgrade-20211117170535-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20211117170535-31976 -n kubernetes-upgrade-20211117170535-31976: exit status 7 (156.285854ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 17:07:24.652058   42478 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20211117170535-31976" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20211117170535-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20211117170535-31976
--- FAIL: TestKubernetesUpgrade (110.04s)

TestStoppedBinaryUpgrade/Upgrade (232.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1704274414.exe start -p stopped-upgrade-20211117170633-31976 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1704274414.exe start -p stopped-upgrade-20211117170633-31976 --memory=2200 --vm-driver=docker : exit status 70 (2m24.993119099s)

-- stdout --
	* [stopped-upgrade-20211117170633-31976] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1694200434
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5945MB available) ...
	! StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds
	* docker "stopped-upgrade-20211117170633-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20211117170633-31976", then "minikube start -p stopped-upgrade-20211117170633-31976 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1704274414.exe start -p stopped-upgrade-20211117170633-31976 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1704274414.exe start -p stopped-upgrade-20211117170633-31976 --memory=2200 --vm-driver=docker : exit status 70 (42.511018983s)

-- stdout --
	* [stopped-upgrade-20211117170633-31976] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1931650060
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20211117170633-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20211117170633-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20211117170633-31976", then "minikube start -p stopped-upgrade-20211117170633-31976 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 17:09:03.005406   42785 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1704274414.exe start -p stopped-upgrade-20211117170633-31976 --memory=2200 --vm-driver=docker 
E1117 17:09:52.520381   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1704274414.exe start -p stopped-upgrade-20211117170633-31976 --memory=2200 --vm-driver=docker : exit status 70 (42.338819502s)

-- stdout --
	* [stopped-upgrade-20211117170633-31976] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2650525672
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20211117170633-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20211117170633-31976" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20211117170633-31976", then "minikube start -p stopped-upgrade-20211117170633-31976 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 17:09:46.761296   43167 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117170633-31976 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (232.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20211117170633-31976
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p stopped-upgrade-20211117170633-31976: exit status 80 (477.475526ms)

-- stdout --
	* 
	* ==> Audit <==
	* |------------|-------------------------------------------------------------|-------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	|  Command   |                            Args                             |                  Profile                  |   User   | Version |          Start Time           |           End Time            |
	|------------|-------------------------------------------------------------|-------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| stop       | -p                                                          | mount-start-2-20211117163347-31976        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:36:13 PST | Wed, 17 Nov 2021 16:36:31 PST |
	|            | mount-start-2-20211117163347-31976                          |                                           |          |         |                               |                               |
	| start      | -p                                                          | mount-start-2-20211117163347-31976        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:36:31 PST | Wed, 17 Nov 2021 16:37:19 PST |
	|            | mount-start-2-20211117163347-31976                          |                                           |          |         |                               |                               |
	| -p         | mount-start-2-20211117163347-31976                          | mount-start-2-20211117163347-31976        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:37:19 PST | Wed, 17 Nov 2021 16:37:20 PST |
	|            | ssh ls /minikube-host                                       |                                           |          |         |                               |                               |
	| delete     | -p                                                          | mount-start-2-20211117163347-31976        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:37:20 PST | Wed, 17 Nov 2021 16:37:33 PST |
	|            | mount-start-2-20211117163347-31976                          |                                           |          |         |                               |                               |
	| delete     | -p                                                          | mount-start-1-20211117163347-31976        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:37:33 PST | Wed, 17 Nov 2021 16:37:34 PST |
	|            | mount-start-1-20211117163347-31976                          |                                           |          |         |                               |                               |
	| start      | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:37:34 PST | Wed, 17 Nov 2021 16:41:11 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | --wait=true --memory=2200                                   |                                           |          |         |                               |                               |
	|            | --nodes=2 -v=8                                              |                                           |          |         |                               |                               |
	|            | --alsologtostderr                                           |                                           |          |         |                               |                               |
	|            | --driver=docker                                             |                                           |          |         |                               |                               |
	| kubectl    | -p multinode-20211117163734-31976 -- apply -f               | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:12 PST | Wed, 17 Nov 2021 16:41:14 PST |
	|            | ./testdata/multinodes/multinode-pod-dns-test.yaml           |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:14 PST | Wed, 17 Nov 2021 16:41:17 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- rollout status                                           |                                           |          |         |                               |                               |
	|            | deployment/busybox                                          |                                           |          |         |                               |                               |
	| kubectl    | -p multinode-20211117163734-31976                           | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:17 PST | Wed, 17 Nov 2021 16:41:17 PST |
	|            | -- get pods -o                                              |                                           |          |         |                               |                               |
	|            | jsonpath='{.items[*].status.podIP}'                         |                                           |          |         |                               |                               |
	| kubectl    | -p multinode-20211117163734-31976                           | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:17 PST | Wed, 17 Nov 2021 16:41:17 PST |
	|            | -- get pods -o                                              |                                           |          |         |                               |                               |
	|            | jsonpath='{.items[*].metadata.name}'                        |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:17 PST | Wed, 17 Nov 2021 16:41:17 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-55jx9 --                                 |                                           |          |         |                               |                               |
	|            | nslookup kubernetes.io                                      |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:17 PST | Wed, 17 Nov 2021 16:41:18 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-g894g --                                 |                                           |          |         |                               |                               |
	|            | nslookup kubernetes.io                                      |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:18 PST | Wed, 17 Nov 2021 16:41:18 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-55jx9 --                                 |                                           |          |         |                               |                               |
	|            | nslookup kubernetes.default                                 |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:18 PST | Wed, 17 Nov 2021 16:41:18 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-g894g --                                 |                                           |          |         |                               |                               |
	|            | nslookup kubernetes.default                                 |                                           |          |         |                               |                               |
	| kubectl    | -p multinode-20211117163734-31976                           | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:18 PST | Wed, 17 Nov 2021 16:41:18 PST |
	|            | -- exec busybox-84b6686758-55jx9                            |                                           |          |         |                               |                               |
	|            | -- nslookup                                                 |                                           |          |         |                               |                               |
	|            | kubernetes.default.svc.cluster.local                        |                                           |          |         |                               |                               |
	| kubectl    | -p multinode-20211117163734-31976                           | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:18 PST | Wed, 17 Nov 2021 16:41:18 PST |
	|            | -- exec busybox-84b6686758-g894g                            |                                           |          |         |                               |                               |
	|            | -- nslookup                                                 |                                           |          |         |                               |                               |
	|            | kubernetes.default.svc.cluster.local                        |                                           |          |         |                               |                               |
	| kubectl    | -p multinode-20211117163734-31976                           | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:18 PST | Wed, 17 Nov 2021 16:41:18 PST |
	|            | -- get pods -o                                              |                                           |          |         |                               |                               |
	|            | jsonpath='{.items[*].metadata.name}'                        |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:19 PST | Wed, 17 Nov 2021 16:41:19 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-55jx9                                    |                                           |          |         |                               |                               |
	|            | -- sh -c nslookup                                           |                                           |          |         |                               |                               |
	|            | host.minikube.internal | awk                                |                                           |          |         |                               |                               |
	|            | 'NR==5' | cut -d' ' -f3                                     |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:19 PST | Wed, 17 Nov 2021 16:41:19 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-55jx9 -- sh                              |                                           |          |         |                               |                               |
	|            | -c ping -c 1 192.168.65.2                                   |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:19 PST | Wed, 17 Nov 2021 16:41:19 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-g894g                                    |                                           |          |         |                               |                               |
	|            | -- sh -c nslookup                                           |                                           |          |         |                               |                               |
	|            | host.minikube.internal | awk                                |                                           |          |         |                               |                               |
	|            | 'NR==5' | cut -d' ' -f3                                     |                                           |          |         |                               |                               |
	| kubectl    | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:19 PST | Wed, 17 Nov 2021 16:41:19 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -- exec                                                     |                                           |          |         |                               |                               |
	|            | busybox-84b6686758-g894g -- sh                              |                                           |          |         |                               |                               |
	|            | -c ping -c 1 192.168.65.2                                   |                                           |          |         |                               |                               |
	| node       | add -p                                                      | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:41:19 PST | Wed, 17 Nov 2021 16:43:05 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | -v 3 --alsologtostderr                                      |                                           |          |         |                               |                               |
	| profile    | list --output json                                          | minikube                                  | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:07 PST | Wed, 17 Nov 2021 16:43:07 PST |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:09 PST | Wed, 17 Nov 2021 16:43:10 PST |
	|            | cp testdata/cp-test.txt                                     |                                           |          |         |                               |                               |
	|            | /home/docker/cp-test.txt                                    |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:10 PST | Wed, 17 Nov 2021 16:43:10 PST |
	|            | ssh sudo cat                                                |                                           |          |         |                               |                               |
	|            | /home/docker/cp-test.txt                                    |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976 cp testdata/cp-test.txt      | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:10 PST | Wed, 17 Nov 2021 16:43:11 PST |
	|            | multinode-20211117163734-31976-m02:/home/docker/cp-test.txt |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:11 PST | Wed, 17 Nov 2021 16:43:11 PST |
	|            | ssh -n                                                      |                                           |          |         |                               |                               |
	|            | multinode-20211117163734-31976-m02                          |                                           |          |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                           |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976 cp testdata/cp-test.txt      | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:11 PST | Wed, 17 Nov 2021 16:43:12 PST |
	|            | multinode-20211117163734-31976-m03:/home/docker/cp-test.txt |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:12 PST | Wed, 17 Nov 2021 16:43:13 PST |
	|            | ssh -n                                                      |                                           |          |         |                               |                               |
	|            | multinode-20211117163734-31976-m03                          |                                           |          |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                           |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:13 PST | Wed, 17 Nov 2021 16:43:22 PST |
	|            | node stop m03                                               |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:43:25 PST | Wed, 17 Nov 2021 16:44:17 PST |
	|            | node start m03                                              |                                           |          |         |                               |                               |
	|            | --alsologtostderr                                           |                                           |          |         |                               |                               |
	| stop       | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:44:18 PST | Wed, 17 Nov 2021 16:44:58 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	| start      | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:44:58 PST | Wed, 17 Nov 2021 16:48:28 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | --wait=true -v=8                                            |                                           |          |         |                               |                               |
	|            | --alsologtostderr                                           |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:48:28 PST | Wed, 17 Nov 2021 16:48:43 PST |
	|            | node delete m03                                             |                                           |          |         |                               |                               |
	| -p         | multinode-20211117163734-31976                              | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:48:46 PST | Wed, 17 Nov 2021 16:49:21 PST |
	|            | stop                                                        |                                           |          |         |                               |                               |
	| start      | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:49:21 PST | Wed, 17 Nov 2021 16:51:49 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	|            | --wait=true -v=8                                            |                                           |          |         |                               |                               |
	|            | --alsologtostderr                                           |                                           |          |         |                               |                               |
	|            | --driver=docker                                             |                                           |          |         |                               |                               |
	| start      | -p                                                          | multinode-20211117163734-31976-m03        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:51:52 PST | Wed, 17 Nov 2021 16:53:11 PST |
	|            | multinode-20211117163734-31976-m03                          |                                           |          |         |                               |                               |
	|            | --driver=docker                                             |                                           |          |         |                               |                               |
	| delete     | -p                                                          | multinode-20211117163734-31976-m03        | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:53:11 PST | Wed, 17 Nov 2021 16:53:27 PST |
	|            | multinode-20211117163734-31976-m03                          |                                           |          |         |                               |                               |
	| delete     | -p                                                          | multinode-20211117163734-31976            | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:53:27 PST | Wed, 17 Nov 2021 16:53:51 PST |
	|            | multinode-20211117163734-31976                              |                                           |          |         |                               |                               |
	| start      | -p                                                          | test-preload-20211117165351-31976         | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:53:51 PST | Wed, 17 Nov 2021 16:56:39 PST |
	|            | test-preload-20211117165351-31976                           |                                           |          |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                             |                                           |          |         |                               |                               |
	|            | --wait=true --preload=false                                 |                                           |          |         |                               |                               |
	|            | --driver=docker                                             |                                           |          |         |                               |                               |
	|            | --kubernetes-version=v1.17.0                                |                                           |          |         |                               |                               |
	| ssh        | -p                                                          | test-preload-20211117165351-31976         | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:56:39 PST | Wed, 17 Nov 2021 16:56:42 PST |
	|            | test-preload-20211117165351-31976                           |                                           |          |         |                               |                               |
	|            | -- docker pull busybox                                      |                                           |          |         |                               |                               |
	| start      | -p                                                          | test-preload-20211117165351-31976         | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:56:43 PST | Wed, 17 Nov 2021 16:57:36 PST |
	|            | test-preload-20211117165351-31976                           |                                           |          |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                             |                                           |          |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker                            |                                           |          |         |                               |                               |
	|            | --kubernetes-version=v1.17.3                                |                                           |          |         |                               |                               |
	| ssh        | -p                                                          | test-preload-20211117165351-31976         | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:57:36 PST | Wed, 17 Nov 2021 16:57:37 PST |
	|            | test-preload-20211117165351-31976                           |                                           |          |         |                               |                               |
	|            | -- docker images                                            |                                           |          |         |                               |                               |
	| delete     | -p                                                          | test-preload-20211117165351-31976         | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:57:37 PST | Wed, 17 Nov 2021 16:57:50 PST |
	|            | test-preload-20211117165351-31976                           |                                           |          |         |                               |                               |
	| start      | -p                                                          | scheduled-stop-20211117165750-31976       | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:57:50 PST | Wed, 17 Nov 2021 16:59:05 PST |
	|            | scheduled-stop-20211117165750-31976                         |                                           |          |         |                               |                               |
	|            | --memory=2048 --driver=docker                               |                                           |          |         |                               |                               |
	| stop       | -p                                                          | scheduled-stop-20211117165750-31976       | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:59:06 PST | Wed, 17 Nov 2021 16:59:06 PST |
	|            | scheduled-stop-20211117165750-31976                         |                                           |          |         |                               |                               |
	|            | --cancel-scheduled                                          |                                           |          |         |                               |                               |
	| stop       | -p                                                          | scheduled-stop-20211117165750-31976       | jenkins  | v1.24.0 | Wed, 17 Nov 2021 16:59:33 PST | Wed, 17 Nov 2021 17:00:05 PST |
	|            | scheduled-stop-20211117165750-31976                         |                                           |          |         |                               |                               |
	|            | --schedule 15s                                              |                                           |          |         |                               |                               |
	| delete     | -p                                                          | scheduled-stop-20211117165750-31976       | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:00:18 PST | Wed, 17 Nov 2021 17:00:24 PST |
	|            | scheduled-stop-20211117165750-31976                         |                                           |          |         |                               |                               |
	| start      | -p                                                          | skaffold-20211117170024-31976             | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:00:26 PST | Wed, 17 Nov 2021 17:01:39 PST |
	|            | skaffold-20211117170024-31976                               |                                           |          |         |                               |                               |
	|            | --memory=2600 --driver=docker                               |                                           |          |         |                               |                               |
	| docker-env | --shell none -p                                             | skaffold-20211117170024-31976             | skaffold | v1.24.0 | Wed, 17 Nov 2021 17:01:40 PST | Wed, 17 Nov 2021 17:01:41 PST |
	|            | skaffold-20211117170024-31976                               |                                           |          |         |                               |                               |
	|            | --user=skaffold                                             |                                           |          |         |                               |                               |
	| delete     | -p                                                          | skaffold-20211117170024-31976             | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:02:18 PST | Wed, 17 Nov 2021 17:02:32 PST |
	|            | skaffold-20211117170024-31976                               |                                           |          |         |                               |                               |
	| delete     | -p                                                          | insufficient-storage-20211117170232-31976 | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:03:22 PST | Wed, 17 Nov 2021 17:03:34 PST |
	|            | insufficient-storage-20211117170232-31976                   |                                           |          |         |                               |                               |
	| delete     | -p                                                          | flannel-20211117170334-31976              | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:03:34 PST | Wed, 17 Nov 2021 17:03:35 PST |
	|            | flannel-20211117170334-31976                                |                                           |          |         |                               |                               |
	| start      | -p                                                          | offline-docker-20211117170334-31976       | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:03:34 PST | Wed, 17 Nov 2021 17:05:19 PST |
	|            | offline-docker-20211117170334-31976                         |                                           |          |         |                               |                               |
	|            | --alsologtostderr -v=1                                      |                                           |          |         |                               |                               |
	|            | --memory=2048 --wait=true                                   |                                           |          |         |                               |                               |
	|            | --driver=docker                                             |                                           |          |         |                               |                               |
	| delete     | -p                                                          | offline-docker-20211117170334-31976       | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:05:19 PST | Wed, 17 Nov 2021 17:05:35 PST |
	|            | offline-docker-20211117170334-31976                         |                                           |          |         |                               |                               |
	| start      | -p                                                          | missing-upgrade-20211117170335-31976      | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:05:01 PST | Wed, 17 Nov 2021 17:06:17 PST |
	|            | missing-upgrade-20211117170335-31976                        |                                           |          |         |                               |                               |
	|            | --memory=2200 --alsologtostderr -v=1                        |                                           |          |         |                               |                               |
	|            | --driver=docker                                             |                                           |          |         |                               |                               |
	| delete     | -p                                                          | missing-upgrade-20211117170335-31976      | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:06:17 PST | Wed, 17 Nov 2021 17:06:33 PST |
	|            | missing-upgrade-20211117170335-31976                        |                                           |          |         |                               |                               |
	| start      | -p                                                          | kubernetes-upgrade-20211117170535-31976   | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:05:35 PST | Wed, 17 Nov 2021 17:06:43 PST |
	|            | kubernetes-upgrade-20211117170535-31976                     |                                           |          |         |                               |                               |
	|            | --memory=2200                                               |                                           |          |         |                               |                               |
	|            | --kubernetes-version=v1.14.0                                |                                           |          |         |                               |                               |
	|            | --alsologtostderr -v=1 --driver=docker                      |                                           |          |         |                               |                               |
	| stop       | -p                                                          | kubernetes-upgrade-20211117170535-31976   | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:06:43 PST | Wed, 17 Nov 2021 17:06:50 PST |
	|            | kubernetes-upgrade-20211117170535-31976                     |                                           |          |         |                               |                               |
	| delete     | -p                                                          | kubernetes-upgrade-20211117170535-31976   | jenkins  | v1.24.0 | Wed, 17 Nov 2021 17:07:24 PST | Wed, 17 Nov 2021 17:07:25 PST |
	|            | kubernetes-upgrade-20211117170535-31976                     |                                           |          |         |                               |                               |
	|------------|-------------------------------------------------------------|-------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 17:06:50
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 17:06:50.775270   42213 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:06:50.775448   42213 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:06:50.775454   42213 out.go:310] Setting ErrFile to fd 2...
	I1117 17:06:50.775458   42213 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:06:50.775549   42213 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:06:50.775843   42213 out.go:304] Setting JSON to false
	I1117 17:06:50.812010   42213 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11185,"bootTime":1637186425,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:06:50.812131   42213 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:06:50.843714   42213 out.go:176] * [kubernetes-upgrade-20211117170535-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:06:50.843810   42213 notify.go:174] Checking for updates...
	I1117 17:06:50.844089   42213 preload.go:305] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I1117 17:06:50.898832   42213 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:06:50.898973   42213 preload.go:305] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I1117 17:06:50.924089   42213 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:06:50.950286   42213 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:06:50.976317   42213 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:06:50.976754   42213 config.go:176] Loaded profile config "kubernetes-upgrade-20211117170535-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 17:06:50.977101   42213 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 17:06:51.101331   42213 docker.go:132] docker version: linux-20.10.6
	I1117 17:06:51.101450   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:51.327924   42213 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:63 SystemTime:2021-11-18 01:06:51.239695789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:51.375238   42213 out.go:176] * Using the docker driver based on existing profile
	I1117 17:06:51.375302   42213 start.go:280] selected driver: docker
	I1117 17:06:51.375312   42213 start.go:775] validating driver "docker" against &{Name:kubernetes-upgrade-20211117170535-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117170535-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 17:06:51.375441   42213 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 17:06:51.377948   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:51.612929   42213 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:63 SystemTime:2021-11-18 01:06:51.521850889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:51.613064   42213 cni.go:93] Creating CNI manager for ""
	I1117 17:06:51.613077   42213 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 17:06:51.613086   42213 start_flags.go:282] config:
	{Name:kubernetes-upgrade-20211117170535-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:kubernetes-upgrade-20211117170535-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 17:06:51.660640   42213 out.go:176] * Starting control plane node kubernetes-upgrade-20211117170535-31976 in cluster kubernetes-upgrade-20211117170535-31976
	I1117 17:06:51.660681   42213 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 17:06:51.707437   42213 out.go:176] * Pulling base image ...
	I1117 17:06:51.707483   42213 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 17:06:51.707507   42213 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 17:06:51.707535   42213 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 17:06:51.707557   42213 cache.go:57] Caching tarball of preloaded images
	I1117 17:06:51.707722   42213 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 17:06:51.707736   42213 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4-rc.0 on docker
	I1117 17:06:51.708438   42213 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/kubernetes-upgrade-20211117170535-31976/config.json ...
	I1117 17:06:51.865954   42213 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 17:06:51.865969   42213 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 17:06:51.865982   42213 cache.go:206] Successfully downloaded all kic artifacts
	I1117 17:06:51.866041   42213 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117170535-31976: {Name:mk9072703b78acf3b7213aadf906ddf9902cea7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 17:06:51.866133   42213 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117170535-31976" in 69.949µs
	I1117 17:06:51.866163   42213 start.go:93] Skipping create...Using existing machine configuration
	I1117 17:06:51.866172   42213 fix.go:55] fixHost starting: 
	I1117 17:06:51.866445   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	I1117 17:06:52.002809   42213 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117170535-31976: state=Stopped err=<nil>
	W1117 17:06:52.002849   42213 fix.go:134] unexpected machine state, will restart: <nil>
	I1117 17:06:52.029701   42213 out.go:176] * Restarting existing docker container for "kubernetes-upgrade-20211117170535-31976" ...
	I1117 17:06:52.029823   42213 cli_runner.go:115] Run: docker start kubernetes-upgrade-20211117170535-31976
	W1117 17:06:52.757923   42213 cli_runner.go:162] docker start kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:52.758068   42213 cli_runner.go:115] Run: docker inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:06:52.888984   42213 cli_runner.go:162] docker inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:06:52.889028   42213 errors.go:82] Failed to get postmortem inspect. docker inspect kubernetes-upgrade-20211117170535-31976 :docker inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:52.889163   42213 cli_runner.go:115] Run: docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.008402   42213 cli_runner.go:162] docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:06:53.008428   42213 errors.go:89] Failed to get postmortem logs. docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976 :docker logs --timestamps --details kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:53.008510   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:53.182243   42213 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: Bad response from Docker engine] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:53.182342   42213 errors.go:98] postmortem docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: Bad response from Docker engine] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:53.182475   42213 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117170535-31976] to gather additional debugging logs...
	I1117 17:06:53.182492   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.318601   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:53.318626   42213 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117170535-31976]: docker network inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:53.318637   42213 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117170535-31976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I1117 17:06:53.318731   42213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 17:06:53.488546   42213 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: Bad response from Docker engine] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-pl
ugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 17:06:53.489139   42213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.602437   42213 cli_runner.go:162] docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:53.602544   42213 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 17:06:53.602617   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:53.725618   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:53.725714   42213 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
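	The `docker container inspect -f` invocations above all evaluate the same Go template to recover the host port published for the guest's 22/tcp; here it fails only because the daemon itself is unresponsive. A minimal, self-contained sketch of that template evaluation (the `portBinding`/`inspectData` types and the `55123` mapping are hypothetical stand-ins for docker's real inspect JSON, which this failing run never produced):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// portBinding is a minimal stand-in for the HostPort field that
// `docker container inspect` exposes under NetworkSettings.Ports;
// the real type lives in docker's API (types.ContainerJSON).
type portBinding struct{ HostPort string }

type inspectData struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// hostPort evaluates the exact Go template the log shows being passed
// to `docker container inspect -f` to find the published SSH port.
func hostPort(ports map[string][]portBinding) (string, error) {
	const f = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	var d inspectData
	d.NetworkSettings.Ports = ports

	var out bytes.Buffer
	tmpl, err := template.New("port").Parse(f)
	if err != nil {
		return "", err
	}
	if err := tmpl.Execute(&out, d); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Hypothetical port mapping for illustration only.
	ports := map[string][]portBinding{"22/tcp": {{HostPort: "55123"}}}
	p, err := hostPort(ports)
	fmt.Println(p, err)
}
```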
	I1117 17:06:54.003912   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:54.124594   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:54.124670   42213 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:54.674331   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:54.797671   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:06:54.797760   42213 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:55.454169   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:06:55.574394   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:06:55.574474   42213 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:06:55.574495   42213 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:06:55.574506   42213 fix.go:57] fixHost completed within 3.708253072s
	I1117 17:06:55.574515   42213 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117170535-31976", held for 3.708290894s
	W1117 17:06:55.574528   42213 start.go:532] error starting host: inspecting NetworkSettings.Networks: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 17:06:55.574629   42213 out.go:241] ! StartHost failed, but will try again: inspecting NetworkSettings.Networks: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 17:06:55.574637   42213 start.go:547] Will try again in 5 seconds ...
	I1117 17:07:00.574920   42213 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117170535-31976: {Name:mk9072703b78acf3b7213aadf906ddf9902cea7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 17:07:00.575053   42213 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117170535-31976" in 98.876µs
	I1117 17:07:00.575074   42213 start.go:93] Skipping create...Using existing machine configuration
	I1117 17:07:00.575078   42213 fix.go:55] fixHost starting: 
	I1117 17:07:00.575377   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:00.698108   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:00.698157   42213 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117170535-31976: state= err=unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.698170   42213 fix.go:113] machineExists: false. err=machine does not exist
	I1117 17:07:00.725135   42213 out.go:176] * docker "kubernetes-upgrade-20211117170535-31976" container is missing, will recreate.
	I1117 17:07:00.725189   42213 delete.go:124] DEMOLISHING kubernetes-upgrade-20211117170535-31976 ...
	I1117 17:07:00.725392   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:00.861868   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	W1117 17:07:00.861911   42213 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.861936   42213 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.862364   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:00.978129   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:00.978172   42213 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117170535-31976, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:00.978260   42213 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976
	W1117 17:07:01.092880   42213 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:01.092911   42213 kic.go:360] could not find the container kubernetes-upgrade-20211117170535-31976 to remove it. will try anyways
	I1117 17:07:01.092996   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:01.209201   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	W1117 17:07:01.209238   42213 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:01.209322   42213 cli_runner.go:115] Run: docker exec --privileged -t kubernetes-upgrade-20211117170535-31976 /bin/bash -c "sudo init 0"
	W1117 17:07:01.322461   42213 cli_runner.go:162] docker exec --privileged -t kubernetes-upgrade-20211117170535-31976 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 17:07:01.322486   42213 oci.go:651] error shutdown kubernetes-upgrade-20211117170535-31976: docker exec --privileged -t kubernetes-upgrade-20211117170535-31976 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:02.326114   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:02.442126   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:02.442165   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:02.442173   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:02.442195   42213 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:02.912951   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:03.031243   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:03.031288   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:03.031301   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:03.031332   42213 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:03.924515   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:04.042733   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:04.042773   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:04.042781   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:04.042805   42213 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:04.679507   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:04.801314   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:04.801351   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:04.801358   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:04.801382   42213 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:05.916094   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:06.030465   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:06.030502   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:06.030512   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:06.030543   42213 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:07.552029   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:07.669793   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:07.669838   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:07.669855   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:07.669885   42213 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:10.714642   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:10.845215   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:10.845252   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:10.845259   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:10.845282   42213 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:16.627792   42213 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}
	W1117 17:07:16.746605   42213 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}} returned with exit code 1
	I1117 17:07:16.746646   42213 oci.go:663] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:16.746653   42213 oci.go:665] temporary error: container kubernetes-upgrade-20211117170535-31976 status is  but expect it to be exited
	I1117 17:07:16.746688   42213 oci.go:87] couldn't shut down kubernetes-upgrade-20211117170535-31976 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20211117170535-31976": docker container inspect kubernetes-upgrade-20211117170535-31976 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	 
	I1117 17:07:16.746769   42213 cli_runner.go:115] Run: docker rm -f -v kubernetes-upgrade-20211117170535-31976
	W1117 17:07:16.862743   42213 cli_runner.go:162] docker rm -f -v kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:16.862866   42213 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976
	W1117 17:07:16.978253   42213 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:16.978363   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 17:07:17.089943   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 17:07:17.090053   42213 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117170535-31976] to gather additional debugging logs...
	I1117 17:07:17.090068   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:07:17.204149   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:17.204173   42213 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117170535-31976]: docker network inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:17.204187   42213 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117170535-31976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 17:07:17.204202   42213 network_create.go:284] Error inspecting docker network kubernetes-upgrade-20211117170535-31976: docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 17:07:17.204558   42213 delete.go:139] delete failed (probably ok) <nil>
	I1117 17:07:17.204565   42213 fix.go:120] Sleeping 1 second for extra luck!
	I1117 17:07:18.207305   42213 start.go:126] createHost starting for "" (driver="docker")
	I1117 17:07:18.282955   42213 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 17:07:18.283137   42213 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117170535-31976" (driver="docker")
	I1117 17:07:18.283183   42213 client.go:168] LocalClient.Create starting
	I1117 17:07:18.283353   42213 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem
	I1117 17:07:18.283604   42213 main.go:130] libmachine: Decoding PEM data...
	I1117 17:07:18.283647   42213 main.go:130] libmachine: Parsing certificate...
	I1117 17:07:18.283801   42213 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem
	I1117 17:07:18.304803   42213 main.go:130] libmachine: Decoding PEM data...
	I1117 17:07:18.304837   42213 main.go:130] libmachine: Parsing certificate...
	I1117 17:07:18.305751   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 17:07:18.423985   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 17:07:18.424097   42213 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117170535-31976] to gather additional debugging logs...
	I1117 17:07:18.424116   42213 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117170535-31976
	W1117 17:07:18.536585   42213 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:18.536611   42213 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117170535-31976]: docker network inspect kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.536632   42213 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117170535-31976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I1117 17:07:18.536716   42213 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 17:07:18.649614   42213 cli_runner.go:162] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 17:07:18.649750   42213 network_create.go:254] running [docker network inspect bridge] to gather additional debugging logs...
	I1117 17:07:18.649784   42213 cli_runner.go:115] Run: docker network inspect bridge
	W1117 17:07:18.764966   42213 cli_runner.go:162] docker network inspect bridge returned with exit code 1
	I1117 17:07:18.764992   42213 network_create.go:257] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.765003   42213 network_create.go:259] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 17:07:18.765010   42213 network_create.go:75] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.765230   42213 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005a6110] misses:0}
	I1117 17:07:18.765255   42213 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 17:07:18.765271   42213 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
	I1117 17:07:18.765349   42213 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976
	W1117 17:07:18.877964   42213 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	E1117 17:07:18.878015   42213 network_create.go:95] error while trying to create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24: create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 17:07:18.878162   42213 out.go:241] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117170535-31976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 17:07:18.878277   42213 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	W1117 17:07:18.994646   42213 cli_runner.go:162] docker ps -a --format {{.Names}} returned with exit code 1
	W1117 17:07:18.994676   42213 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:18.994769   42213 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true
	W1117 17:07:19.109297   42213 cli_runner.go:162] docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I1117 17:07:19.109339   42213 client.go:171] LocalClient.Create took 826.13022ms
	I1117 17:07:21.115680   42213 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 17:07:21.115838   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:21.230804   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:21.230882   42213 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:21.410252   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:21.526439   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:21.526527   42213 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:21.859203   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:21.979700   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:21.979785   42213 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:22.440613   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:22.555457   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:07:22.555546   42213 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:07:22.555575   42213 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:22.555599   42213 start.go:129] duration metric: createHost completed in 4.34814707s
	I1117 17:07:22.555658   42213 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 17:07:22.555709   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:22.667949   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:22.668030   42213 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:22.865551   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:22.979098   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:22.979187   42213 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:23.282713   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:23.402324   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	I1117 17:07:23.402401   42213 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:24.065859   42213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976
	W1117 17:07:24.180619   42213 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976 returned with exit code 1
	W1117 17:07:24.180700   42213 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:07:24.180723   42213 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117170535-31976": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117170535-31976: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 17:07:24.180734   42213 fix.go:57] fixHost completed within 23.60512594s
	I1117 17:07:24.180744   42213 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117170535-31976", held for 23.605153823s
	W1117 17:07:24.180884   42213 out.go:241] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117170535-31976" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117170535-31976 container: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 17:07:24.207631   42213 out.go:176] 
	W1117 17:07:24.207883   42213 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117170535-31976 container: docker volume create kubernetes-upgrade-20211117170535-31976 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117170535-31976 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 17:07:24.207899   42213 out.go:241] * 
	W1117 17:07:24.208954   42213 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20211117170633-31976": docker container inspect stopped-upgrade-20211117170633-31976 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.50s)

TestPause/serial/Start (0.66s)

=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211117171027-31976 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:78: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20211117171027-31976 --memory=2048 --install-addons=false --wait=all --driver=docker : exit status 69 (446.850602ms)

-- stdout --
	* [pause-20211117171027-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
pause_test.go:80: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-20211117171027-31976 --memory=2048 --install-addons=false --wait=all --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (117.06464ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (95.313835ms)

-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/Start (0.66s)

TestPause/serial/SecondStartNoReconfiguration (0.67s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211117171027-31976 --alsologtostderr -v=1 --driver=docker 
pause_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20211117171027-31976 --alsologtostderr -v=1 --driver=docker : exit status 69 (458.858599ms)

-- stdout --
	* [pause-20211117171027-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:10:28.641197   43586 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:28.641396   43586 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:28.641400   43586 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:28.641403   43586 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:28.641466   43586 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:28.641706   43586 out.go:304] Setting JSON to false
	I1117 17:10:28.667661   43586 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11403,"bootTime":1637186425,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:10:28.667759   43586 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:10:28.694546   43586 out.go:176] * [pause-20211117171027-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:10:28.694717   43586 notify.go:174] Checking for updates...
	I1117 17:10:28.741466   43586 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:10:28.767210   43586 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:10:28.793461   43586 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:10:28.819631   43586 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:10:28.820605   43586 config.go:176] Loaded profile config "running-upgrade-20211117170725-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 17:10:28.820687   43586 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:10:28.916364   43586 docker.go:108] docker version returned error: exit status 1
	I1117 17:10:28.943025   43586 out.go:176] * Using the docker driver based on user configuration
	I1117 17:10:28.943049   43586 start.go:280] selected driver: docker
	I1117 17:10:28.943057   43586 start.go:775] validating driver "docker" against <nil>
	I1117 17:10:28.943070   43586 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:10:28.990208   43586 out.go:176] 
	W1117 17:10:28.990421   43586 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:10:28.990506   43586 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:10:29.038114   43586 out.go:176] 

** /stderr **
pause_test.go:92: failed to second start a running minikube with args: "out/minikube-darwin-amd64 start -p pause-20211117171027-31976 --alsologtostderr -v=1 --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (119.106526ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (93.731499ms)

-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (0.67s)

TestPause/serial/Pause (0.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211117171027-31976 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p pause-20211117171027-31976 --alsologtostderr -v=5: exit status 85 (93.441161ms)

-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

-- /stdout --
** stderr ** 
	I1117 17:10:29.313362   43606 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:29.314077   43606 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:29.314083   43606 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:29.314086   43606 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:29.314166   43606 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:29.314340   43606 out.go:304] Setting JSON to false
	I1117 17:10:29.314355   43606 mustload.go:65] Loading cluster: pause-20211117171027-31976
	I1117 17:10:29.340024   43606 out.go:176] * Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:10:29.365752   43606 out.go:176]   To start a cluster, run: "minikube start -p pause-20211117171027-31976"

** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-darwin-amd64 pause -p pause-20211117171027-31976 --alsologtostderr -v=5" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (116.18729ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (92.944424ms)

-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (114.887953ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (94.929647ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/Pause (0.51s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20211117171027-31976 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20211117171027-31976 --output=json --layout=cluster: exit status 85 (41.144016ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c43b0eb6-dc80-47dc-b00c-cb70f5f43e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles."}}
	{"specversion":"1.0","id":"2c592b0c-8f09-42eb-9ae0-e51436f1f94e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p pause-20211117171027-31976\""}}

                                                
                                                
-- /stdout --
pause_test.go:194: unmarshalling: invalid character '{' after top-level value
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (114.172677ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (95.516913ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/VerifyStatus (0.25s)
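The `unmarshalling: invalid character '{' after top-level value` failure above is what `encoding/json.Unmarshal` reports when handed several concatenated top-level JSON values at once — exactly the shape of minikube's `--output=json` stream, which emits one CloudEvent object per line. A minimal sketch (not the test's actual code) of decoding such a stream value-by-value with `json.Decoder`:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// cloudEvent keeps only the fields of minikube's JSON events that
// this sketch cares about.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// decodeEvents consumes a stream of concatenated JSON objects.
// json.Unmarshal on the whole stream fails with "invalid character
// '{' after top-level value"; json.Decoder handles one value at a time.
func decodeEvents(stream string) ([]cloudEvent, error) {
	dec := json.NewDecoder(strings.NewReader(stream))
	var events []cloudEvent
	for {
		var ev cloudEvent
		if err := dec.Decode(&ev); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		events = append(events, ev)
	}
	return events, nil
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"first"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"second"}}`
	events, err := decodeEvents(stream)
	fmt.Println(len(events), err) // prints: 2 <nil>
}
```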

                                                
                                    
TestPause/serial/Unpause (0.52s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20211117171027-31976 --alsologtostderr -v=5
pause_test.go:119: (dbg) Non-zero exit: out/minikube-darwin-amd64 unpause -p pause-20211117171027-31976 --alsologtostderr -v=5: exit status 85 (95.288704ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 17:10:30.078711   43623 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:30.079283   43623 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:30.079289   43623 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:30.079292   43623 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:30.079359   43623 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:30.079615   43623 mustload.go:65] Loading cluster: pause-20211117171027-31976
	I1117 17:10:30.106325   43623 out.go:176] * Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:10:30.132188   43623 out.go:176]   To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
** /stderr **
pause_test.go:121: failed to unpause minikube with args: "out/minikube-darwin-amd64 unpause -p pause-20211117171027-31976 --alsologtostderr -v=5" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (118.334441ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (95.076369ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (118.250509ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (94.502301ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/Unpause (0.52s)

                                                
                                    
TestPause/serial/PauseAgain (0.52s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211117171027-31976 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p pause-20211117171027-31976 --alsologtostderr -v=5: exit status 85 (94.879613ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 17:10:30.600850   43638 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:30.601045   43638 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:30.601050   43638 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:30.601053   43638 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:30.601126   43638 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:30.601285   43638 out.go:304] Setting JSON to false
	I1117 17:10:30.601300   43638 mustload.go:65] Loading cluster: pause-20211117171027-31976
	I1117 17:10:30.627452   43638 out.go:176] * Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:10:30.654349   43638 out.go:176]   To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-darwin-amd64 pause -p pause-20211117171027-31976 --alsologtostderr -v=5" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (114.867689ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (94.042813ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (116.771234ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (93.521892ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/PauseAgain (0.52s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:166: (dbg) Run:  docker ps -a
pause_test.go:166: (dbg) Non-zero exit: docker ps -a: exit status 1 (115.321064ms)

                                                
                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
pause_test.go:171: (dbg) Run:  docker volume inspect pause-20211117171027-31976
pause_test.go:171: (dbg) Non-zero exit: docker volume inspect pause-20211117171027-31976: exit status 1 (118.301439ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
pause_test.go:176: (dbg) Run:  sudo docker network ls
pause_test.go:176: (dbg) Non-zero exit: sudo docker network ls: exit status 1 (137.253632ms)

                                                
                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
pause_test.go:178: failed to get list of networks: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (120.81741ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (99.043891ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117171027-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117171027-31976: exit status 1 (124.825145ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117171027-31976 -n pause-20211117171027-31976: exit status 85 (99.148667ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117171027-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117171027-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117171027-31976" host is not running, skipping log retrieval (state="* Profile \"pause-20211117171027-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117171027-31976\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (1.14s)
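VerifyDeletedResources checks that no container, volume, or network bearing the profile name survives deletion, by scanning the output of `docker ps -a`, `docker volume inspect`, and `docker network ls`. A trimmed sketch of that style of check (the helper name and shape are illustrative, not the test's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// leftoverResources scans the output of a listing command (e.g.
// "docker ps -a" or "docker network ls") for lines that still mention
// the deleted profile; any hit indicates an incomplete cleanup.
func leftoverResources(output, profile string) []string {
	var hits []string
	for _, line := range strings.Split(output, "\n") {
		if strings.Contains(line, profile) {
			hits = append(hits, strings.TrimSpace(line))
		}
	}
	return hits
}

func main() {
	psOut := "CONTAINER ID   NAMES\nabc123   pause-20211117171027-31976\n"
	fmt.Println(leftoverResources(psOut, "pause-20211117171027-31976"))
}
```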

                                                
                                    
TestNoKubernetes/serial/Start (0.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211117171033-31976 --no-kubernetes --driver=docker 
no_kubernetes_test.go:78: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211117171033-31976 --no-kubernetes --driver=docker : exit status 69 (406.708336ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20211117171033-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
no_kubernetes_test.go:80: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20211117171033-31976 --no-kubernetes --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117171033-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117171033-31976: exit status 1 (120.76095ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976: exit status 85 (99.481399ms)

                                                
                                                
-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117171033-31976" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117171033-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117171033-31976\"")
--- FAIL: TestNoKubernetes/serial/Start (0.63s)
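The `PROVIDER_DOCKER_VERSION_EXIT_1` failure above shows how minikube probes the docker driver: it shells out to `docker version`, and a non-zero exit (here caused by "Bad response from Docker engine") marks the provider unusable, surfacing as overall exit status 69. A hedged sketch of the same style of pre-flight probe (helper names are illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command and reports whether it exited successfully.
func probe(name string, args ...string) bool {
	return exec.Command(name, args...).Run() == nil
}

// dockerHealthy mirrors the check minikube performs before using the
// docker driver; "Bad response from Docker engine" makes it fail.
func dockerHealthy() bool {
	return probe("docker", "version", "--format", "{{.Server.Version}}")
}

func main() {
	if !dockerHealthy() {
		// In minikube this surfaces as PROVIDER_DOCKER_VERSION_EXIT_1
		// with exit status 69, as seen in the log above.
		fmt.Println("docker engine unhealthy; skip docker-driver tests")
		return
	}
	fmt.Println("docker engine OK")
}
```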

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:117: expected N/A in the profile list for kubernetes version but got : "out/minikube-darwin-amd64 profile list" : 
-- stdout --
	|--------------------------------------|-----------|---------|----|------|---------|---------|-------|
	|               Profile                | VM Driver | Runtime | IP | Port | Version | Status  | Nodes |
	|--------------------------------------|-----------|---------|----|------|---------|---------|-------|
	| running-upgrade-20211117170725-31976 | docker    | docker  |    | 8443 | v1.18.0 | Unknown |     1 |
	|--------------------------------------|-----------|---------|----|------|---------|---------|-------|

                                                
                                                
-- /stdout --
** stderr ** 
	! Found 2 invalid profile(s) ! 
	* 	 NoKubernetes-20211117171033-31976
	* 	 multinode-20211117163734-31976-m02
	* You can delete them using the following command(s): 
		 $ minikube delete -p NoKubernetes-20211117171033-31976 
		 $ minikube delete -p multinode-20211117163734-31976-m02 

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117171033-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117171033-31976: exit status 1 (115.155447ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976: exit status 85 (93.435193ms)

                                                
                                                
-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117171033-31976" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117171033-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117171033-31976\"")
--- FAIL: TestNoKubernetes/serial/ProfileList (0.51s)

                                                
                                    
TestNoKubernetes/serial/Stop (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20211117171033-31976
no_kubernetes_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p NoKubernetes-20211117171033-31976: exit status 85 (93.543522ms)

                                                
                                                
-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

                                                
                                                
-- /stdout --
no_kubernetes_test.go:102: Failed to stop minikube "out/minikube-darwin-amd64 stop -p NoKubernetes-20211117171033-31976" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117171033-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117171033-31976: exit status 1 (114.697769ms)

                                                
                                                
-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976: exit status 85 (93.989844ms)

-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117171033-31976" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117171033-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117171033-31976\"")
--- FAIL: TestNoKubernetes/serial/Stop (0.30s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211117171033-31976 --driver=docker 
no_kubernetes_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211117171033-31976 --driver=docker : exit status 69 (437.710967ms)

-- stdout --
	* [NoKubernetes-20211117171033-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
no_kubernetes_test.go:135: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20211117171033-31976 --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117171033-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117171033-31976: exit status 1 (149.600032ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117171033-31976 -n NoKubernetes-20211117171033-31976: exit status 85 (98.073779ms)

-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117171033-31976" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117171033-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117171033-31976\"")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (0.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : exit status 69 (428.583971ms)

-- stdout --
	* [auto-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:10:59.934533   44061 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:10:59.934673   44061 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:59.934678   44061 out.go:310] Setting ErrFile to fd 2...
	I1117 17:10:59.934681   44061 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:10:59.934772   44061 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:10:59.935085   44061 out.go:304] Setting JSON to false
	I1117 17:10:59.960020   44061 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11434,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:10:59.960108   44061 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:10:59.987245   44061 out.go:176] * [auto-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:10:59.987436   44061 notify.go:174] Checking for updates...
	I1117 17:11:00.034912   44061 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:00.060624   44061 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:00.086752   44061 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:00.112672   44061 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:00.112930   44061 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:00.200802   44061 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:00.227520   44061 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:00.227533   44061 start.go:280] selected driver: docker
	I1117 17:11:00.227538   44061 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:00.227553   44061 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:00.276941   44061 out.go:176] 
	W1117 17:11:00.277125   44061 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:00.277209   44061 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:00.302571   44061 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/auto/Start (0.43s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : exit status 69 (449.254402ms)

-- stdout --
	* [false-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:01.242511   44094 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:01.242698   44094 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:01.242703   44094 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:01.242706   44094 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:01.242785   44094 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:01.243091   44094 out.go:304] Setting JSON to false
	I1117 17:11:01.267811   44094 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11436,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:01.267905   44094 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:01.295133   44094 out.go:176] * [false-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:01.295340   44094 notify.go:174] Checking for updates...
	I1117 17:11:01.342752   44094 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:01.368330   44094 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:01.410528   44094 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:01.437412   44094 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:01.437734   44094 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:01.526383   44094 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:01.553566   44094 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:01.553637   44094 start.go:280] selected driver: docker
	I1117 17:11:01.553653   44094 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:01.553688   44094 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:01.601872   44094 out.go:176] 
	W1117 17:11:01.602087   44094 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:01.602174   44094 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:01.630111   44094 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/false/Start (0.45s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cilium-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : exit status 69 (450.046719ms)

-- stdout --
	* [cilium-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:02.573932   44127 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:02.574062   44127 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:02.574066   44127 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:02.574070   44127 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:02.574139   44127 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:02.574449   44127 out.go:304] Setting JSON to false
	I1117 17:11:02.599252   44127 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11437,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:02.599348   44127 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:02.626613   44127 out.go:176] * [cilium-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:02.626837   44127 notify.go:174] Checking for updates...
	I1117 17:11:02.675051   44127 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:02.701008   44127 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:02.727148   44127 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:02.752781   44127 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:02.752979   44127 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:02.839183   44127 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:02.865052   44127 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:02.865112   44127 start.go:280] selected driver: docker
	I1117 17:11:02.865123   44127 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:02.865152   44127 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:02.911965   44127 out.go:176] 
	W1117 17:11:02.912184   44127 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:02.912264   44127 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:02.964198   44127 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/cilium/Start (0.45s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 69 (427.920832ms)

-- stdout --
	* [calico-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:03.950926   44160 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:03.951054   44160 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:03.951059   44160 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:03.951065   44160 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:03.951143   44160 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:03.951446   44160 out.go:304] Setting JSON to false
	I1117 17:11:03.976699   44160 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11438,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:03.976803   44160 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:04.003953   44160 out.go:176] * [calico-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:04.004148   44160 notify.go:174] Checking for updates...
	I1117 17:11:04.052659   44160 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:04.078708   44160 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:04.104223   44160 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:04.130404   44160 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:04.130613   44160 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:04.218336   44160 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:04.244736   44160 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:04.244774   44160 start.go:280] selected driver: docker
	I1117 17:11:04.244791   44160 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:04.244833   44160 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:04.291677   44160 out.go:176] 
	W1117 17:11:04.291898   44160 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:04.292013   44160 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:04.318485   44160 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/calico/Start (0.43s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p custom-weave-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : exit status 69 (427.466337ms)

-- stdout --
	* [custom-weave-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:05.257392   44195 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:05.257524   44195 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:05.257528   44195 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:05.257532   44195 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:05.257607   44195 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:05.257913   44195 out.go:304] Setting JSON to false
	I1117 17:11:05.282753   44195 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11440,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:05.282842   44195 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:05.310098   44195 out.go:176] * [custom-weave-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:05.357809   44195 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:05.310293   44195 notify.go:174] Checking for updates...
	I1117 17:11:05.384807   44195 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:05.410511   44195 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:05.436645   44195 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:05.437143   44195 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:05.525020   44195 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:05.550582   44195 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:05.550596   44195 start.go:280] selected driver: docker
	I1117 17:11:05.550600   44195 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:05.550610   44195 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:05.597718   44195 out.go:176] 
	W1117 17:11:05.597970   44195 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:05.598055   44195 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:05.623720   44195 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : exit status 69 (406.726641ms)

-- stdout --
	* [enable-default-cni-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:06.627053   44228 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:06.627202   44228 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:06.627206   44228 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:06.627209   44228 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:06.627277   44228 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:06.627583   44228 out.go:304] Setting JSON to false
	I1117 17:11:06.652698   44228 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11441,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:06.652788   44228 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:06.679853   44228 out.go:176] * [enable-default-cni-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:06.679978   44228 notify.go:174] Checking for updates...
	I1117 17:11:06.727665   44228 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:06.753718   44228 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:06.779243   44228 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:06.805404   44228 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:06.805640   44228 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:06.893934   44228 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:06.920760   44228 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:06.920818   44228 start.go:280] selected driver: docker
	I1117 17:11:06.920850   44228 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:06.920872   44228 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:06.947262   44228 out.go:176] 
	W1117 17:11:06.947456   44228 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:06.947549   44228 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:06.973456   44228 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.41s)

TestNetworkPlugins/group/kindnet/Start (0.45s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20211117170335-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 69 (452.797152ms)

-- stdout --
	* [kindnet-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:07.918803   44261 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:07.918933   44261 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:07.918937   44261 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:07.918941   44261 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:07.919015   44261 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:07.919314   44261 out.go:304] Setting JSON to false
	I1117 17:11:07.944346   44261 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11442,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:07.944451   44261 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:07.971019   44261 out.go:176] * [kindnet-20211117170335-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:07.971227   44261 notify.go:174] Checking for updates...
	I1117 17:11:08.019715   44261 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:08.045816   44261 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:08.071890   44261 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:08.097540   44261 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:08.097779   44261 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:08.183308   44261 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:08.210308   44261 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:08.210363   44261 start.go:280] selected driver: docker
	I1117 17:11:08.210374   44261 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:08.210395   44261 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:08.258564   44261 out.go:176] 
	W1117 17:11:08.258676   44261 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:08.258736   44261 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:08.311833   44261 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/kindnet/Start (0.45s)

TestNetworkPlugins/group/bridge/Start (0.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p bridge-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : exit status 69 (435.457161ms)

-- stdout --
	* [bridge-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:09.256491   44294 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:09.256622   44294 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:09.256627   44294 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:09.256630   44294 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:09.256708   44294 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:09.257021   44294 out.go:304] Setting JSON to false
	I1117 17:11:09.281994   44294 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11444,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:09.282098   44294 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:09.309215   44294 out.go:176] * [bridge-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:09.357859   44294 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:09.309495   44294 notify.go:174] Checking for updates...
	I1117 17:11:09.383809   44294 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:09.409812   44294 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:09.435879   44294 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:09.436292   44294 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:09.529333   44294 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:09.556020   44294 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:09.556094   44294 start.go:280] selected driver: docker
	I1117 17:11:09.556104   44294 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:09.556125   44294 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:09.582017   44294 out.go:176] 
	W1117 17:11:09.582250   44294 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:09.582385   44294 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:09.630959   44294 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.44s)

TestNetworkPlugins/group/kubenet/Start (0.45s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20211117170334-31976 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : exit status 69 (450.614693ms)

-- stdout --
	* [kubenet-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:10.571855   44327 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:10.572041   44327 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:10.572045   44327 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:10.572049   44327 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:10.572117   44327 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:10.572413   44327 out.go:304] Setting JSON to false
	I1117 17:11:10.597062   44327 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11445,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:10.597166   44327 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:10.624340   44327 out.go:176] * [kubenet-20211117170334-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:10.672017   44327 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:10.624587   44327 notify.go:174] Checking for updates...
	I1117 17:11:10.700684   44327 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:10.726875   44327 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:10.752781   44327 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:10.752990   44327 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:10.839132   44327 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:10.865775   44327 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:10.865799   44327 start.go:280] selected driver: docker
	I1117 17:11:10.865807   44327 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:10.865826   44327 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:10.912879   44327 out.go:176] 
	W1117 17:11:10.913074   44327 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:10.913179   44327 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:10.960899   44327 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.45s)

TestStartStop/group/old-k8s-version/serial/FirstStart (0.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20211117171111-31976 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20211117171111-31976 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: exit status 69 (445.848166ms)

-- stdout --
	* [old-k8s-version-20211117171111-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:11.901361   44360 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:11.901494   44360 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:11.901499   44360 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:11.901503   44360 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:11.901572   44360 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:11.901897   44360 out.go:304] Setting JSON to false
	I1117 17:11:11.927165   44360 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11446,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:11.927291   44360 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:11.954694   44360 out.go:176] * [old-k8s-version-20211117171111-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:11.954895   44360 notify.go:174] Checking for updates...
	I1117 17:11:12.003197   44360 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:12.029381   44360 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:12.055396   44360 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:12.080944   44360 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:12.081705   44360 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:12.170201   44360 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:12.196394   44360 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:12.196450   44360 start.go:280] selected driver: docker
	I1117 17:11:12.196468   44360 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:12.196548   44360 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:12.256135   44360 out.go:176] 
	W1117 17:11:12.256380   44360 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:12.256440   44360 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:12.287149   44360 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20211117171111-31976 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (117.701267ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.716242ms)

-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (0.66s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211117171111-31976 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117171111-31976 create -f testdata/busybox.yaml: exit status 1 (39.558498ms)

** stderr ** 
	error: context "old-k8s-version-20211117171111-31976" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context old-k8s-version-20211117171111-31976 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (114.391098ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (94.561325ms)

-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (115.178772ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (94.776556ms)

-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117171111-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117171111-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (101.808614ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "old-k8s-version-20211117171111-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117171111-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211117171111-31976 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117171111-31976 describe deploy/metrics-server -n kube-system: exit status 1 (39.318232ms)
** stderr ** 
	error: context "old-k8s-version-20211117171111-31976" does not exist
** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20211117171111-31976 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (123.254308ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (95.139959ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.36s)
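Every failure in this serial group follows from the same condition: the profile was never created, yet the suite keeps running addon and status commands against the nonexistent cluster. A minimal sketch of an early guard (not part of the test suite; the `valid`/`invalid` shape of `minikube profile list -o json` output is assumed here):

```python
import json

def profile_exists(profile_list_json: str, name: str) -> bool:
    """Check `minikube profile list -o json` output (assumed shape:
    {"invalid": [...], "valid": [{"Name": ...}, ...]}) for a profile,
    so dependent steps can be skipped instead of failing with exit 85/10."""
    data = json.loads(profile_list_json)
    return any(p.get("Name") == name for p in data.get("valid", []))

# Hypothetical sample output: only the default profile exists.
sample = '{"invalid": [], "valid": [{"Name": "minikube"}]}'
assert not profile_exists(sample, "old-k8s-version-20211117171111-31976")
```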
TestStartStop/group/old-k8s-version/serial/Stop (0.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20211117171111-31976 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p old-k8s-version-20211117171111-31976 --alsologtostderr -v=3: exit status 85 (95.361531ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
** stderr ** 
	I1117 17:11:13.381554   44390 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:13.381699   44390 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:13.381704   44390 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:13.381707   44390 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:13.381776   44390 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:13.381947   44390 out.go:304] Setting JSON to false
	I1117 17:11:13.382064   44390 mustload.go:65] Loading cluster: old-k8s-version-20211117171111-31976
	I1117 17:11:13.408817   44390 out.go:176] * Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:13.434626   44390 out.go:176]   To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop. args "out/minikube-darwin-amd64 stop -p old-k8s-version-20211117171111-31976 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (114.86573ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (95.033171ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (0.31s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (92.991286ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117171111-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117171111-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (102.715849ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "old-k8s-version-20211117171111-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117171111-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (117.944836ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (102.957928ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.42s)
TestStartStop/group/old-k8s-version/serial/SecondStart (0.64s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20211117171111-31976 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0
E1117 17:11:14.146106   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20211117171111-31976 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: exit status 69 (428.353406ms)
-- stdout --
	* [old-k8s-version-20211117171111-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1117 17:11:14.105603   44403 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:14.105746   44403 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:14.105751   44403 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:14.105754   44403 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:14.105825   44403 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:14.106044   44403 out.go:304] Setting JSON to false
	I1117 17:11:14.130912   44403 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11449,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:14.131007   44403 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:14.157093   44403 out.go:176] * [old-k8s-version-20211117171111-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:14.157259   44403 notify.go:174] Checking for updates...
	I1117 17:11:14.205848   44403 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:14.231709   44403 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:14.257817   44403 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:14.283429   44403 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:14.283678   44403 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:14.373228   44403 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:14.398489   44403 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:14.398546   44403 start.go:280] selected driver: docker
	I1117 17:11:14.398560   44403 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:14.398607   44403 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:14.445787   44403 out.go:176] 
	W1117 17:11:14.446007   44403 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:14.446111   44403 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:14.471617   44403 out.go:176] 
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20211117171111-31976 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0": exit status 69
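The stderr above shows the actual root cause: minikube's driver validation shells out to `docker version --format {{.Server.Os}}-{{.Server.Version}}`, the daemon answers "Bad response from Docker engine", and the start aborts with PROVIDER_DOCKER_VERSION_EXIT_1. A minimal sketch of the same health probe (a hypothetical helper, not minikube's implementation; any failure mode, including a missing `docker` binary, is treated as unhealthy):

```python
import subprocess

def docker_engine_healthy() -> bool:
    """Probe the Docker daemon the way minikube's driver check does:
    run `docker version --format {{.Server.Os}}-{{.Server.Version}}` and
    treat any non-zero exit (e.g. "Bad response from Docker engine"),
    missing binary, or timeout as an unhealthy engine."""
    try:
        result = subprocess.run(
            ["docker", "version", "--format", "{{.Server.Os}}-{{.Server.Version}}"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0
```

On this macOS worker the probe would have returned unhealthy, which is why every later `docker inspect` in the post-mortems also fails.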
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (114.44463ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.447586ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (0.64s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117171111-31976" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (113.099867ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.603642ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.21s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117171111-31976" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211117171111-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117171111-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (37.944898ms)
** stderr ** 
	error: context "old-k8s-version-20211117171111-31976" does not exist
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20211117171111-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (116.526372ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.831625ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.25s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117171111-31976 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117171111-31976 "sudo crictl images -o json": exit status 85 (94.278689ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117171111-31976 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
start_stop_delete_test.go:289: v1.14.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.3.1",
- 	"k8s.gcr.io/etcd:3.3.10",
- 	"k8s.gcr.io/kube-apiserver:v1.14.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.14.0",
- 	"k8s.gcr.io/kube-proxy:v1.14.0",
- 	"k8s.gcr.io/kube-scheduler:v1.14.0",
- 	"k8s.gcr.io/pause:3.1",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
  }
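The decode error above is a knock-on effect of the exit-85 output: `sudo crictl images -o json` never ran, so the harness fed minikube's "* Profile ... not found" hint to a JSON decoder, which rejects it at the leading '*'. A small sketch of guarding that decode (a hypothetical helper, not the harness code):

```python
import json

def parse_crictl_images(output: str):
    """`crictl images -o json` should emit a JSON document; when the profile
    is missing, minikube prints a plain-text "* Profile ... not found" hint
    instead. Return the decoded document, or None for non-JSON output."""
    try:
        return json.loads(output)
    except json.JSONDecodeError:
        return None

hint = '* Profile "old-k8s-version-20211117171111-31976" not found.'
assert parse_crictl_images(hint) is None  # the hint is not JSON
```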
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (115.538559ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.656418ms)
-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (0.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20211117171111-31976 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-20211117171111-31976 --alsologtostderr -v=1: exit status 85 (93.893414ms)

-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:15.502642   44432 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:15.502834   44432 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:15.502839   44432 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:15.502842   44432 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:15.502914   44432 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:15.503072   44432 out.go:304] Setting JSON to false
	I1117 17:11:15.503087   44432 mustload.go:65] Loading cluster: old-k8s-version-20211117171111-31976
	I1117 17:11:15.528896   44432 out.go:176] * Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:15.555078   44432 out.go:176]   To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p old-k8s-version-20211117171111-31976 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (115.411022ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.598006ms)

-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117171111-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117171111-31976: exit status 1 (115.859495ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117171111-31976 -n old-k8s-version-20211117171111-31976: exit status 85 (93.684666ms)

-- stdout --
	* Profile "old-k8s-version-20211117171111-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117171111-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117171111-31976" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117171111-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117171111-31976\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.51s)

TestStartStop/group/no-preload/serial/FirstStart (0.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20211117171117-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20211117171117-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (456.572664ms)

-- stdout --
	* [no-preload-20211117171117-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:17.392115   44486 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:17.392245   44486 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:17.392250   44486 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:17.392253   44486 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:17.392326   44486 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:17.392644   44486 out.go:304] Setting JSON to false
	I1117 17:11:17.417540   44486 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11452,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:17.417635   44486 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:17.444877   44486 out.go:176] * [no-preload-20211117171117-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:17.445067   44486 notify.go:174] Checking for updates...
	I1117 17:11:17.492166   44486 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:17.519364   44486 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:17.545320   44486 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:17.571045   44486 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:17.571289   44486 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:17.658762   44486 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:17.685718   44486 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:17.685793   44486 start.go:280] selected driver: docker
	I1117 17:11:17.685805   44486 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:17.685827   44486 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:17.734143   44486 out.go:176] 
	W1117 17:11:17.734337   44486 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:17.734442   44486 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:17.787389   44486 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p no-preload-20211117171117-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (113.958118ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.051184ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (0.67s)
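Every failure in this group cascades from the single root cause visible in the FirstStart trace above: minikube's driver validation runs `docker version --format {{.Server.Os}}-{{.Server.Version}}`, the daemon answers `Error response from daemon: Bad response from Docker engine`, and the start exits with PROVIDER_DOCKER_VERSION_EXIT_1 (status 69); every later step then hits a profile that was never created. A minimal pre-flight sketch, not part of helpers_test.go or the harness (the function name `check_docker_engine` is hypothetical), that a CI job could run before the suite to fail fast instead of producing dozens of dependent failures:

```shell
# Hypothetical pre-flight check: probe the Docker engine with the same
# query minikube's driver validation uses. Prints "healthy" when the
# daemon responds, "unavailable" when the docker CLI is missing or the
# daemon answers with an error such as "Bad response from Docker engine".
check_docker_engine() {
  if docker version --format '{{.Server.Os}}-{{.Server.Version}}' >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "unavailable"
  fi
}

check_docker_engine
```

On the agent that produced this report the probe would have printed `unavailable`, matching the `Healthy:false ... PROVIDER_DOCKER_VERSION_EXIT_1` driver status recorded in the stderr above.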

TestStartStop/group/no-preload/serial/DeployApp (0.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211117171117-31976 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context no-preload-20211117171117-31976 create -f testdata/busybox.yaml: exit status 1 (38.465691ms)

** stderr ** 
	error: context "no-preload-20211117171117-31976" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context no-preload-20211117171117-31976 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (114.431961ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (165.107457ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (115.364312ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (93.326369ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117171117-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117171117-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (98.703598ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20211117171117-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117171117-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211117171117-31976 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context no-preload-20211117171117-31976 describe deploy/metrics-server -n kube-system: exit status 1 (38.221682ms)

** stderr ** 
	error: context "no-preload-20211117171117-31976" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20211117171117-31976 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (115.08369ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (93.14017ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.35s)

TestStartStop/group/no-preload/serial/Stop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20211117171117-31976 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p no-preload-20211117171117-31976 --alsologtostderr -v=3: exit status 85 (91.833828ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:18.931420   44516 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:18.931552   44516 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:18.931556   44516 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:18.931559   44516 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:18.931633   44516 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:18.931799   44516 out.go:304] Setting JSON to false
	I1117 17:11:18.931917   44516 mustload.go:65] Loading cluster: no-preload-20211117171117-31976
	I1117 17:11:18.957577   44516 out.go:176] * Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:18.983253   44516 out.go:176]   To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p no-preload-20211117171117-31976 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (113.3027ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.6796ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (0.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (93.669504ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117171117-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117171117-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (99.445913ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20211117171117-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117171117-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (116.628361ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (93.567397ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/SecondStart (0.64s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20211117171117-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20211117171117-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (430.669857ms)

-- stdout --
	* [no-preload-20211117171117-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:19.636695   44529 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:19.636881   44529 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:19.636886   44529 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:19.636889   44529 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:19.636954   44529 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:19.637170   44529 out.go:304] Setting JSON to false
	I1117 17:11:19.661989   44529 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11454,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:19.662199   44529 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:19.689105   44529 out.go:176] * [no-preload-20211117171117-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:19.689278   44529 notify.go:174] Checking for updates...
	I1117 17:11:19.736051   44529 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:19.761957   44529 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:19.788074   44529 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:19.813823   44529 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:19.814215   44529 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:19.905014   44529 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:19.931846   44529 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:19.931932   44529 start.go:280] selected driver: docker
	I1117 17:11:19.931947   44529 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:19.931965   44529 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:19.980728   44529 out.go:176] 
	W1117 17:11:19.980942   44529 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:19.981004   44529 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:20.006471   44529 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-20211117171117-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (116.790679ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.210372ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (0.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117171117-31976" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (113.012085ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.381008ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.21s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117171117-31976" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211117171117-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20211117171117-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (38.601731ms)

** stderr ** 
	error: context "no-preload-20211117171117-31976" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20211117171117-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (116.55975ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.251525ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.25s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20211117171117-31976 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-20211117171117-31976 "sudo crictl images -o json": exit status 85 (92.493746ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-20211117171117-31976 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (114.797723ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.969745ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (0.51s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20211117171117-31976 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-20211117171117-31976 --alsologtostderr -v=1: exit status 85 (94.747587ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:21.041871   44558 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:21.042062   44558 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:21.042067   44558 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:21.042070   44558 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:21.042138   44558 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:21.042296   44558 out.go:304] Setting JSON to false
	I1117 17:11:21.042310   44558 mustload.go:65] Loading cluster: no-preload-20211117171117-31976
	I1117 17:11:21.068653   44558 out.go:176] * Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:21.094785   44558 out.go:176]   To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p no-preload-20211117171117-31976 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (114.475511ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (93.157117ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117171117-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117171117-31976: exit status 1 (113.45638ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117171117-31976 -n no-preload-20211117171117-31976: exit status 85 (94.57873ms)

-- stdout --
	* Profile "no-preload-20211117171117-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117171117-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117171117-31976" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117171117-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117171117-31976\"")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.51s)

TestStartStop/group/embed-certs/serial/FirstStart (0.62s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20211117171122-31976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20211117171122-31976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (407.045787ms)

-- stdout --
	* [embed-certs-20211117171122-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:22.923474   44612 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:22.923604   44612 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:22.923609   44612 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:22.923612   44612 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:22.923692   44612 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:22.923993   44612 out.go:304] Setting JSON to false
	I1117 17:11:22.948995   44612 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11457,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:22.949086   44612 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:22.976040   44612 out.go:176] * [embed-certs-20211117171122-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:23.001844   44612 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:22.976286   44612 notify.go:174] Checking for updates...
	I1117 17:11:23.027672   44612 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:23.053963   44612 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:23.079891   44612 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:23.081212   44612 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:23.168238   44612 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:23.194928   44612 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:23.194977   44612 start.go:280] selected driver: docker
	I1117 17:11:23.194994   44612 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:23.195056   44612 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:23.242905   44612 out.go:176] 
	W1117 17:11:23.243128   44612 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:23.243228   44612 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:23.270752   44612 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p embed-certs-20211117171122-31976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (116.58481ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.554592ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (0.62s)
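Editor's note: every embed-certs failure below traces back to the same root cause visible in the log above: minikube's driver health probe, `docker version --format {{.Server.Os}}-{{.Server.Version}}`, exited 1 with "Bad response from Docker engine", so driver validation aborts (exit 69) before any profile is created. The probe can be re-run by hand; a minimal sketch, assuming a `docker` CLI is on PATH (the guard and messages are illustrative, not part of the harness):

```shell
# Re-run the health probe minikube executes during docker driver validation.
# "Bad response from Docker engine" typically means Docker Desktop is hung
# or mid-restart; restarting the daemon usually clears it.
if command -v docker >/dev/null 2>&1; then
  docker version --format '{{.Server.Os}}-{{.Server.Version}}' \
    || echo "probe failed with exit code $?"
else
  echo "docker CLI not installed; probe skipped"
fi
```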

TestStartStop/group/embed-certs/serial/DeployApp (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211117171122-31976 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context embed-certs-20211117171122-31976 create -f testdata/busybox.yaml: exit status 1 (39.368157ms)

** stderr ** 
	error: context "embed-certs-20211117171122-31976" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context embed-certs-20211117171122-31976 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (116.591392ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (94.275643ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (117.605111ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.90504ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117171122-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117171122-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (93.73135ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20211117171122-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117171122-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211117171122-31976 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context embed-certs-20211117171122-31976 describe deploy/metrics-server -n kube-system: exit status 1 (39.303486ms)

** stderr ** 
	error: context "embed-certs-20211117171122-31976" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20211117171122-31976 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (115.693673ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.67503ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.34s)

TestStartStop/group/embed-certs/serial/Stop (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20211117171122-31976 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p embed-certs-20211117171122-31976 --alsologtostderr -v=3: exit status 85 (93.22186ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:24.348207   44642 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:24.348392   44642 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:24.348396   44642 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:24.348400   44642 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:24.348475   44642 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:24.348638   44642 out.go:304] Setting JSON to false
	I1117 17:11:24.348756   44642 mustload.go:65] Loading cluster: embed-certs-20211117171122-31976
	I1117 17:11:24.374431   44642 out.go:176] * Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:24.400164   44642 out.go:176]   To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p embed-certs-20211117171122-31976 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (116.19737ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.581223ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (0.30s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (92.361166ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117171122-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117171122-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (93.258867ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20211117171122-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117171122-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (114.763631ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.434769ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/embed-certs/serial/SecondStart (0.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20211117171122-31976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20211117171122-31976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (423.972225ms)

-- stdout --
	* [embed-certs-20211117171122-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:25.045722   44655 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:25.045846   44655 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:25.045851   44655 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:25.045854   44655 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:25.045935   44655 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:25.046158   44655 out.go:304] Setting JSON to false
	I1117 17:11:25.070919   44655 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11460,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:25.071015   44655 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:25.098193   44655 out.go:176] * [embed-certs-20211117171122-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:25.098381   44655 notify.go:174] Checking for updates...
	I1117 17:11:25.145904   44655 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:25.171898   44655 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:25.197708   44655 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:25.223638   44655 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:25.223869   44655 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:25.308990   44655 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:25.335740   44655 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:25.335762   44655 start.go:280] selected driver: docker
	I1117 17:11:25.335769   44655 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:25.335780   44655 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:25.382705   44655 out.go:176] 
	W1117 17:11:25.382880   44655 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:25.382967   44655 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:25.409458   44655 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p embed-certs-20211117171122-31976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (117.688353ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.706523ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (0.64s)
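Editor's note: the second start fails identically to the first, and every post-mortem status probe in between returns exit code 85 (profile not found, "may be ok" per the harness) because the profile was never created. The probe the harness repeats after each step can be sketched as follows, assuming a `minikube` binary is on PATH (the guard is illustrative; the profile name is taken from the log):

```shell
# Sketch of the status probe the test harness runs after each step.
# Exit code 85 indicates the named profile does not exist.
PROFILE=embed-certs-20211117171122-31976
if command -v minikube >/dev/null 2>&1; then
  minikube status --format='{{.Host}}' -p "$PROFILE"
  echo "status exit code: $?"
else
  echo "minikube not installed; probe skipped"
fi
```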

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117171122-31976" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (114.430386ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (94.769853ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117171122-31976" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211117171122-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211117171122-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (40.562488ms)

** stderr ** 
	error: context "embed-certs-20211117171122-31976" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20211117171122-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (115.320159ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (94.505829ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20211117171122-31976 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p embed-certs-20211117171122-31976 "sudo crictl images -o json": exit status 85 (94.951509ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p embed-certs-20211117171122-31976 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (115.775613ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (96.640928ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (0.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20211117171122-31976 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p embed-certs-20211117171122-31976 --alsologtostderr -v=1: exit status 85 (109.283936ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:26.469415   44684 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:26.469603   44684 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:26.469607   44684 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:26.469610   44684 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:26.469669   44684 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:26.469825   44684 out.go:304] Setting JSON to false
	I1117 17:11:26.469839   44684 mustload.go:65] Loading cluster: embed-certs-20211117171122-31976
	I1117 17:11:26.494325   44684 out.go:176] * Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:26.520300   44684 out.go:176]   To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p embed-certs-20211117171122-31976 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (116.252291ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (94.681038ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117171122-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117171122-31976: exit status 1 (114.489085ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117171122-31976 -n embed-certs-20211117171122-31976: exit status 85 (93.346148ms)

-- stdout --
	* Profile "embed-certs-20211117171122-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117171122-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117171122-31976" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117171122-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117171122-31976\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.53s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.64s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117171128-31976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117171128-31976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (426.380514ms)

-- stdout --
	* [default-k8s-different-port-20211117171128-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:29.031747   44759 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:29.031878   44759 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:29.031882   44759 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:29.031885   44759 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:29.031958   44759 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:29.032270   44759 out.go:304] Setting JSON to false
	I1117 17:11:29.057267   44759 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11464,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:29.057368   44759 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:29.084558   44759 out.go:176] * [default-k8s-different-port-20211117171128-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:29.084764   44759 notify.go:174] Checking for updates...
	I1117 17:11:29.133281   44759 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:29.159034   44759 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:29.185042   44759 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:29.211121   44759 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:29.211328   44759 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:29.298305   44759 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:29.324996   44759 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:29.325016   44759 start.go:280] selected driver: docker
	I1117 17:11:29.325024   44759 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:29.325045   44759 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:29.371851   44759 out.go:176] 
	W1117 17:11:29.372031   44759 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:29.372113   44759 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:29.397741   44759 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117171128-31976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (113.282924ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (94.969643ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.64s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.46s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211117171128-31976 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117171128-31976 create -f testdata/busybox.yaml: exit status 1 (38.08757ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117171128-31976" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context default-k8s-different-port-20211117171128-31976 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (115.267222ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (95.235948ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (114.699434ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (93.470935ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.46s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.35s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117171128-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117171128-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (98.63833ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20211117171128-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117171128-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211117171128-31976 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117171128-31976 describe deploy/metrics-server -n kube-system: exit status 1 (39.151299ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117171128-31976" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20211117171128-31976 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (114.607991ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (94.836481ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117171128-31976 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117171128-31976 --alsologtostderr -v=3: exit status 85 (93.654529ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 17:11:30.473118   44789 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:30.473253   44789 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:30.473258   44789 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:30.473261   44789 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:30.473332   44789 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:30.473507   44789 out.go:304] Setting JSON to false
	I1117 17:11:30.473628   44789 mustload.go:65] Loading cluster: default-k8s-different-port-20211117171128-31976
	I1117 17:11:30.499940   44789 out.go:176] * Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:30.526094   44789 out.go:176]   To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop. args "out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117171128-31976 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (114.061279ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (97.652515ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (93.59421ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117171128-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117171128-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (98.661052ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20211117171128-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117171128-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (118.103796ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (94.276605ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.41s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117171128-31976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117171128-31976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (429.900171ms)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20211117171128-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 17:11:31.185751   44802 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:31.185878   44802 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:31.185883   44802 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:31.185886   44802 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:31.185955   44802 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:31.186188   44802 out.go:304] Setting JSON to false
	I1117 17:11:31.211019   44802 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11466,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:31.211123   44802 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:31.238342   44802 out.go:176] * [default-k8s-different-port-20211117171128-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:31.238612   44802 notify.go:174] Checking for updates...
	I1117 17:11:31.285796   44802 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:31.312016   44802 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:31.337874   44802 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:31.363559   44802 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:31.363861   44802 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:31.454229   44802 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:31.481012   44802 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:31.481028   44802 start.go:280] selected driver: docker
	I1117 17:11:31.481036   44802 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:31.481047   44802 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:31.527633   44802 out.go:176] 
	W1117 17:11:31.527803   44802 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:31.527890   44802 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:31.554822   44802 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117171128-31976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (115.242623ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (93.923103ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.64s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117171128-31976" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (114.944156ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (95.258535ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117171128-31976" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211117171128-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117171128-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (41.082187ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20211117171128-31976" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20211117171128-31976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (113.353492ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (93.373406ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117171128-31976 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117171128-31976 "sudo crictl images -o json": exit status 85 (93.088043ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117171128-31976 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (114.74862ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (94.020441ms)

                                                
                                                
-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (0.51s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117171128-31976 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117171128-31976 --alsologtostderr -v=1: exit status 85 (93.67513ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:32.588075   44831 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:32.588275   44831 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:32.588280   44831 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:32.588283   44831 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:32.588357   44831 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:32.588512   44831 out.go:304] Setting JSON to false
	I1117 17:11:32.588527   44831 mustload.go:65] Loading cluster: default-k8s-different-port-20211117171128-31976
	I1117 17:11:32.614393   44831 out.go:176] * Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:32.641051   44831 out.go:176]   To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117171128-31976 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (113.708256ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (94.877982ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117171128-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117171128-31976: exit status 1 (114.352001ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117171128-31976 -n default-k8s-different-port-20211117171128-31976: exit status 85 (94.071626ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117171128-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117171128-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117171128-31976" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117171128-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117171128-31976\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (0.51s)

TestStartStop/group/newest-cni/serial/FirstStart (0.62s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20211117171134-31976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20211117171134-31976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (405.967398ms)

-- stdout --
	* [newest-cni-20211117171134-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:34.545138   44885 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:34.545278   44885 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:34.545283   44885 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:34.545286   44885 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:34.545365   44885 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:34.545687   44885 out.go:304] Setting JSON to false
	I1117 17:11:34.570603   44885 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11469,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:34.570692   44885 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:34.597863   44885 out.go:176] * [newest-cni-20211117171134-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:34.598044   44885 notify.go:174] Checking for updates...
	I1117 17:11:34.624644   44885 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:34.650412   44885 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:34.676501   44885 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:34.702377   44885 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:34.702599   44885 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:34.789771   44885 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:34.816949   44885 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:34.817030   44885 start.go:280] selected driver: docker
	I1117 17:11:34.817044   44885 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:34.817080   44885 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:34.864501   44885 out.go:176] 
	W1117 17:11:34.864725   44885 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:34.864822   44885 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:34.890597   44885 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p newest-cni-20211117171134-31976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (114.952333ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (94.563304ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (0.62s)
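Editor's note: every failure above and below shares one root cause. The runner's Docker engine answers `docker version --format {{.Server.Os}}-{{.Server.Version}}` with "Bad response from Docker engine" and exit status 1, so minikube's driver validation (start.go:786) marks the driver unhealthy with reason `PROVIDER_DOCKER_VERSION_EXIT_1`, the cluster never starts, and each later command reports a missing profile. A minimal shell sketch of that classification (the helper `classify_docker_status` is illustrative, not part of minikube):

```shell
# Map the exit status of `docker version --format '{{.Server.Os}}-{{.Server.Version}}'`
# to the provider state this log reports. Exit 0 means the daemon answered the
# version query; any other status becomes the reason string seen above.
classify_docker_status() {
  if [ "$1" -eq 0 ]; then
    echo "Healthy"
  else
    echo "PROVIDER_DOCKER_VERSION_EXIT_$1"
  fi
}

# Against a live daemon:
#   docker version --format '{{.Server.Os}}-{{.Server.Version}}' >/dev/null 2>&1
#   classify_docker_status $?
```

Note the daemon here is reachable (exit 1 with an "Error response from daemon" message) rather than stopped; a stopped daemon would instead fail with "Cannot connect to the Docker daemon".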

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117171134-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117171134-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (104.950645ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20211117171134-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117171134-31976 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (114.400342ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (96.388379ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.32s)

TestStartStop/group/newest-cni/serial/Stop (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20211117171134-31976 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p newest-cni-20211117171134-31976 --alsologtostderr -v=3: exit status 85 (93.995278ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:35.478179   44906 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:35.478322   44906 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:35.478330   44906 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:35.478334   44906 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:35.478411   44906 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:35.478567   44906 out.go:304] Setting JSON to false
	I1117 17:11:35.478684   44906 mustload.go:65] Loading cluster: newest-cni-20211117171134-31976
	I1117 17:11:35.505178   44906 out.go:176] * Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:35.531198   44906 out.go:176]   To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p newest-cni-20211117171134-31976 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (114.425722ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (95.889463ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (0.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (93.569307ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117171134-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117171134-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (99.920455ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20211117171134-31976" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117171134-31976 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (111.848858ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (95.28825ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/newest-cni/serial/SecondStart (0.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20211117171134-31976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20211117171134-31976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (431.75236ms)

-- stdout --
	* [newest-cni-20211117171134-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 17:11:36.186143   44919 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:36.186278   44919 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:36.186282   44919 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:36.186286   44919 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:36.186358   44919 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:36.186583   44919 out.go:304] Setting JSON to false
	I1117 17:11:36.211631   44919 start.go:112] hostinfo: {"hostname":"37310.local","uptime":11471,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 17:11:36.211734   44919 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 17:11:36.239230   44919 out.go:176] * [newest-cni-20211117171134-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 17:11:36.239403   44919 notify.go:174] Checking for updates...
	I1117 17:11:36.293895   44919 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 17:11:36.318520   44919 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 17:11:36.344458   44919 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 17:11:36.370369   44919 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 17:11:36.370580   44919 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 17:11:36.455774   44919 docker.go:108] docker version returned error: exit status 1
	I1117 17:11:36.482850   44919 out.go:176] * Using the docker driver based on user configuration
	I1117 17:11:36.483002   44919 start.go:280] selected driver: docker
	I1117 17:11:36.483011   44919 start.go:775] validating driver "docker" against <nil>
	I1117 17:11:36.483032   44919 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 17:11:36.530399   44919 out.go:176] 
	W1117 17:11:36.530638   44919 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 17:11:36.530718   44919 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 17:11:36.556445   44919 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p newest-cni-20211117171134-31976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (119.088528ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (94.39568ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (0.65s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20211117171134-31976 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p newest-cni-20211117171134-31976 "sudo crictl images -o json": exit status 85 (93.694005ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p newest-cni-20211117171134-31976 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (114.756172ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (94.577967ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (0.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20211117171134-31976 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p newest-cni-20211117171134-31976 --alsologtostderr -v=1: exit status 85 (92.997442ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
** stderr ** 
	I1117 17:11:37.134925   44937 out.go:297] Setting OutFile to fd 1 ...
	I1117 17:11:37.135052   44937 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:37.135057   44937 out.go:310] Setting ErrFile to fd 2...
	I1117 17:11:37.135060   44937 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 17:11:37.135141   44937 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 17:11:37.135310   44937 out.go:304] Setting JSON to false
	I1117 17:11:37.135325   44937 mustload.go:65] Loading cluster: newest-cni-20211117171134-31976
	I1117 17:11:37.161789   44937 out.go:176] * Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	I1117 17:11:37.187748   44937 out.go:176]   To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p newest-cni-20211117171134-31976 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (113.579848ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (94.035373ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117171134-31976
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117171134-31976: exit status 1 (117.076129ms)

-- stdout --
	[]

                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117171134-31976 -n newest-cni-20211117171134-31976: exit status 85 (94.097293ms)

-- stdout --
	* Profile "newest-cni-20211117171134-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117171134-31976"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117171134-31976" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117171134-31976\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117171134-31976\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.51s)
E1117 17:12:08.602607   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory
E1117 17:12:20.412832   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 17:12:36.374401   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/skaffold-20211117170024-31976/client.crt: no such file or directory


Test pass (140/245)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 20.31
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.28
10 TestDownloadOnly/v1.22.3/json-events 8.34
11 TestDownloadOnly/v1.22.3/preload-exists 0
14 TestDownloadOnly/v1.22.3/kubectl 0
15 TestDownloadOnly/v1.22.3/LogsDuration 0.27
17 TestDownloadOnly/v1.22.4-rc.0/json-events 9.76
18 TestDownloadOnly/v1.22.4-rc.0/preload-exists 0
21 TestDownloadOnly/v1.22.4-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.4-rc.0/LogsDuration 0.27
23 TestDownloadOnly/DeleteAll 1.13
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.65
25 TestDownloadOnlyKic 8.47
26 TestOffline 120.62
28 TestAddons/Setup 228.52
32 TestAddons/parallel/MetricsServer 5.85
33 TestAddons/parallel/HelmTiller 11.44
34 TestAddons/parallel/Olm 51.95
35 TestAddons/parallel/CSI 56.96
37 TestAddons/serial/GCPAuth 16.04
38 TestAddons/StoppedEnableDisable 18.17
45 TestHyperKitDriverInstallOrUpdate 8.96
48 TestErrorSpam/setup 70.63
49 TestErrorSpam/start 2.36
50 TestErrorSpam/status 1.96
51 TestErrorSpam/pause 2.21
52 TestErrorSpam/unpause 2.21
53 TestErrorSpam/stop 18.1
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 123.89
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 7.52
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 1.67
65 TestFunctional/serial/CacheCmd/cache/add_local 2.07
71 TestFunctional/serial/MinikubeKubectlCmd 0.47
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.55
75 TestFunctional/serial/LogsCmd 3.02
76 TestFunctional/serial/LogsFileCmd 3.19
78 TestFunctional/parallel/ConfigCmd 0.39
79 TestFunctional/parallel/DashboardCmd 4.11
80 TestFunctional/parallel/DryRun 1.61
81 TestFunctional/parallel/InternationalLanguage 0.77
82 TestFunctional/parallel/StatusCmd 2.34
86 TestFunctional/parallel/AddonsCmd 0.29
87 TestFunctional/parallel/PersistentVolumeClaim 27.57
89 TestFunctional/parallel/SSHCmd 1.45
90 TestFunctional/parallel/CpCmd 1.39
91 TestFunctional/parallel/MySQL 25.14
92 TestFunctional/parallel/FileSync 0.71
93 TestFunctional/parallel/CertSync 4.22
97 TestFunctional/parallel/NodeLabels 0.05
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
101 TestFunctional/parallel/ImageCommands/ImageList 0.46
102 TestFunctional/parallel/ImageCommands/ImageBuild 4.09
103 TestFunctional/parallel/ImageCommands/Setup 4.23
104 TestFunctional/parallel/DockerEnv/bash 2.54
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.47
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.9
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.37
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.9
109 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.57
110 TestFunctional/parallel/ImageCommands/ImageRemove 1.05
111 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.45
112 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.71
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 12.06
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.85
124 TestFunctional/parallel/ProfileCmd/profile_list 0.91
126 TestFunctional/parallel/MountCmd/specific-port 3.7
127 TestFunctional/parallel/ProfileCmd/profile_json_output 1.17
128 TestFunctional/parallel/Version/short 0.1
129 TestFunctional/parallel/Version/components 1.2
130 TestFunctional/delete_addon-resizer_images 0.29
131 TestFunctional/delete_my-image_image 0.12
132 TestFunctional/delete_minikube_cached_images 0.12
135 TestIngressAddonLegacy/StartLegacyK8sCluster 131.08
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 22.44
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
142 TestJSONOutput/start/Command 122.94
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.88
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.77
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 18.23
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.78
167 TestKicCustomNetwork/create_custom_network 86.91
168 TestKicCustomNetwork/use_default_bridge_network 70.53
169 TestKicExistingNetwork 86.77
170 TestMainNoArgs 0.07
173 TestMountStart/serial/StartWithMountFirst 70.76
174 TestMountStart/serial/StartWithMountSecond 60.64
175 TestMountStart/serial/VerifyMountFirst 0.63
176 TestMountStart/serial/VerifyMountSecond 0.64
177 TestMountStart/serial/DeleteFirst 12.22
178 TestMountStart/serial/VerifyMountPostDelete 0.66
179 TestMountStart/serial/Stop 17.89
180 TestMountStart/serial/RestartStopped 48.6
181 TestMountStart/serial/VerifyMountPostStop 0.62
184 TestMultiNode/serial/FreshStart2Nodes 218.16
185 TestMultiNode/serial/DeployApp2Nodes 6.62
186 TestMultiNode/serial/PingHostFrom2Pods 0.88
187 TestMultiNode/serial/AddNode 107.53
188 TestMultiNode/serial/ProfileList 0.69
189 TestMultiNode/serial/CopyFile 5.27
190 TestMultiNode/serial/StopNode 11.97
191 TestMultiNode/serial/StartAfterStop 53.55
192 TestMultiNode/serial/RestartKeepsNodes 249.82
193 TestMultiNode/serial/DeleteNode 17.42
194 TestMultiNode/serial/StopMultiNode 35.64
195 TestMultiNode/serial/RestartMultiNode 150.73
196 TestMultiNode/serial/ValidateNameConflict 95.45
200 TestPreload 239.36
202 TestScheduledStopUnix 154.13
203 TestSkaffold 127.63
205 TestInsufficientStorage 62.27
209 TestMissingContainerUpgrade 177.64
221 TestStoppedBinaryUpgrade/Setup 0.93
238 TestPause/serial/DeletePaused 0.67
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.1
246 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.1
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.56
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.11
304 TestStartStop/group/newest-cni/serial/DeployApp 0
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.14.0/json-events (20.31s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117161028-31976 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117161028-31976 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker : (20.309834858s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (20.31s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117161028-31976
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117161028-31976: exit status 85 (276.891211ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 16:10:28
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 16:10:28.244403   31993 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:10:28.244541   31993 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:10:28.244546   31993 out.go:310] Setting ErrFile to fd 2...
	I1117 16:10:28.244550   31993 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:10:28.244629   31993 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	W1117 16:10:28.244721   31993 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: no such file or directory
	I1117 16:10:28.245175   31993 out.go:304] Setting JSON to true
	I1117 16:10:28.272117   31993 start.go:112] hostinfo: {"hostname":"37310.local","uptime":7803,"bootTime":1637186425,"procs":369,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:10:28.272224   31993 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:10:28.299393   31993 notify.go:174] Checking for updates...
	W1117 16:10:28.299408   31993 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 16:10:28.326989   31993 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 16:10:28.409472   31993 docker.go:108] docker version returned error: exit status 1
	I1117 16:10:28.436380   31993 start.go:280] selected driver: docker
	I1117 16:10:28.436396   31993 start.go:775] validating driver "docker" against <nil>
	I1117 16:10:28.436506   31993 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:10:28.601524   31993 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:10:28.654350   31993 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:10:28.821537   31993 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:10:28.848549   31993 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 16:10:28.903157   31993 start_flags.go:349] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1117 16:10:28.903281   31993 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 16:10:28.903299   31993 cni.go:93] Creating CNI manager for ""
	I1117 16:10:28.903307   31993 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:10:28.903315   31993 start_flags.go:282] config:
	{Name:download-only-20211117161028-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117161028-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:10:28.929167   31993 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 16:10:28.971054   31993 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 16:10:28.971110   31993 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 16:10:28.971342   31993 cache.go:107] acquiring lock: {Name:mk9c639d04569cd41b690ba84db3b28513e97efc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.972514   31993 cache.go:107] acquiring lock: {Name:mkc6b8c18af8e6076a0180035ca9830d9fba00a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.971350   31993 cache.go:107] acquiring lock: {Name:mk180c40b6e1f2da8cb64a8b99cac3b86dc9de06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.972621   31993 cache.go:107] acquiring lock: {Name:mk8f620d9b10efa13248e729cb37388865ee3fdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973382   31993 cache.go:107] acquiring lock: {Name:mk7325d2596d0f11fb08f19b2f2e688771d0ae1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973409   31993 cache.go:107] acquiring lock: {Name:mkb027ade3a4c65ac16f2b72910962d5de390152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973440   31993 cache.go:107] acquiring lock: {Name:mkd7780cf8d25b6405f749ee20a4e0f719dad578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973447   31993 cache.go:107] acquiring lock: {Name:mk5c36936c371747dc4933dd61c059617f61b466 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973483   31993 cache.go:107] acquiring lock: {Name:mk05b37b08cc31cd7ce6d446aeb6ef5e3eac5f50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973487   31993 cache.go:107] acquiring lock: {Name:mk5d210c77f7868c7ca57bd26ca572ced5606c1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:10:28.973645   31993 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/download-only-20211117161028-31976/config.json ...
	I1117 16:10:28.973689   31993 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/download-only-20211117161028-31976/config.json: {Name:mkd752e611167965cd914d4dac9168c36679287f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:10:28.973874   31993 image.go:134] retrieving image: k8s.gcr.io/coredns:1.3.1
	I1117 16:10:28.973877   31993 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.14.0
	I1117 16:10:28.973906   31993 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.14.0
	I1117 16:10:28.973885   31993 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.10
	I1117 16:10:28.973886   31993 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.14.0
	I1117 16:10:28.973976   31993 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1117 16:10:28.973998   31993 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1117 16:10:28.974006   31993 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I1117 16:10:28.974060   31993 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.14.0
	I1117 16:10:28.974116   31993 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 16:10:28.974291   31993 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 16:10:28.974653   31993 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/linux/v1.14.0/kubeadm
	I1117 16:10:28.974659   31993 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/linux/v1.14.0/kubelet
	I1117 16:10:28.974678   31993 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/linux/v1.14.0/kubectl
	I1117 16:10:28.974659   31993 image.go:176] found k8s.gcr.io/coredns:1.3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.3.1 original:k8s.gcr.io/coredns:1.3.1} opener:0xc00036c000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.974695   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1
	I1117 16:10:28.975326   31993 image.go:176] found index.docker.io/kubernetesui/dashboard:v2.3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:index.docker.io} repository:kubernetesui/dashboard} tag:v2.3.1 original:docker.io/kubernetesui/dashboard:v2.3.1} opener:0xc00036c070 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.975350   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I1117 16:10:28.975871   31993 image.go:176] found k8s.gcr.io/kube-proxy:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.14.0 original:k8s.gcr.io/kube-proxy:v1.14.0} opener:0xc000b022a0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.975898   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0
	I1117 16:10:28.975979   31993 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:k8s-minikube/storage-provisioner} tag:v5 original:gcr.io/k8s-minikube/storage-provisioner:v5} opener:0xc000420000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.975997   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I1117 16:10:28.976055   31993 image.go:176] found k8s.gcr.io/pause:3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.1 original:k8s.gcr.io/pause:3.1} opener:0xc000b02380 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.976067   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I1117 16:10:28.976173   31993 image.go:176] found k8s.gcr.io/kube-scheduler:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.14.0 original:k8s.gcr.io/kube-scheduler:v1.14.0} opener:0xc000338070 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.976205   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
	I1117 16:10:28.976218   31993 image.go:176] found index.docker.io/kubernetesui/metrics-scraper:v1.0.7 locally: &{ref:{Repository:{Registry:{insecure:false registry:index.docker.io} repository:kubernetesui/metrics-scraper} tag:v1.0.7 original:docker.io/kubernetesui/metrics-scraper:v1.0.7} opener:0xc000420310 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.976263   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I1117 16:10:28.976277   31993 cache.go:96] cache image "k8s.gcr.io/coredns:1.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1" took 2.900438ms
	I1117 16:10:28.976431   31993 image.go:176] found k8s.gcr.io/kube-apiserver:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.14.0 original:k8s.gcr.io/kube-apiserver:v1.14.0} opener:0xc0003a40e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.976448   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
	I1117 16:10:28.976444   31993 image.go:176] found k8s.gcr.io/etcd:3.3.10 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.3.10 original:k8s.gcr.io/etcd:3.3.10} opener:0xc00043c000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.976478   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10
	I1117 16:10:28.976749   31993 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 3.334706ms
	I1117 16:10:28.976827   31993 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0" took 3.437379ms
	I1117 16:10:28.976902   31993 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 4.247252ms
	I1117 16:10:28.977077   31993 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0" took 4.679956ms
	I1117 16:10:28.977098   31993 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.14.0 original:k8s.gcr.io/kube-controller-manager:v1.14.0} opener:0xc0003a4230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 16:10:28.977121   31993 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
	I1117 16:10:28.977143   31993 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.10" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10" took 5.763963ms
	I1117 16:10:28.977210   31993 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 5.041918ms
	I1117 16:10:28.977327   31993 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 6.002948ms
	I1117 16:10:28.977363   31993 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0" took 4.785405ms
	I1117 16:10:28.977452   31993 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0" took 6.129052ms
	I1117 16:10:29.080420   31993 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 16:10:29.080580   31993 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 16:10:29.080660   31993 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 16:10:31.458535   31993 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/darwin/v1.14.0/kubectl
	E1117 16:10:32.552550   31993 cache.go:215] Error caching images:  Caching images for kubeadm: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1": write: unable to calculate manifest: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117161028-31976"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.28s)

TestDownloadOnly/v1.22.3/json-events (8.34s)

=== RUN   TestDownloadOnly/v1.22.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117161028-31976 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117161028-31976 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker : (8.342263262s)
--- PASS: TestDownloadOnly/v1.22.3/json-events (8.34s)

TestDownloadOnly/v1.22.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.3/preload-exists
--- PASS: TestDownloadOnly/v1.22.3/preload-exists (0.00s)

TestDownloadOnly/v1.22.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.3/kubectl
--- PASS: TestDownloadOnly/v1.22.3/kubectl (0.00s)

TestDownloadOnly/v1.22.3/LogsDuration (0.27s)

=== RUN   TestDownloadOnly/v1.22.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117161028-31976
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117161028-31976: exit status 85 (274.53639ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 16:10:56
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 16:10:56.914223   32040 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:10:56.914432   32040 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:10:56.914437   32040 out.go:310] Setting ErrFile to fd 2...
	I1117 16:10:56.914440   32040 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:10:56.914519   32040 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	W1117 16:10:56.914599   32040 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: no such file or directory
	I1117 16:10:56.914782   32040 out.go:304] Setting JSON to true
	I1117 16:10:56.941300   32040 start.go:112] hostinfo: {"hostname":"37310.local","uptime":7831,"bootTime":1637186425,"procs":367,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:10:56.941392   32040 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:10:56.971427   32040 notify.go:174] Checking for updates...
	W1117 16:10:56.971427   32040 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 16:10:56.997547   32040 config.go:176] Loaded profile config "download-only-20211117161028-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	W1117 16:10:56.997648   32040 start.go:683] api.Load failed for download-only-20211117161028-31976: filestore "download-only-20211117161028-31976": Docker machine "download-only-20211117161028-31976" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 16:10:56.997725   32040 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 16:10:56.997762   32040 start.go:683] api.Load failed for download-only-20211117161028-31976: filestore "download-only-20211117161028-31976": Docker machine "download-only-20211117161028-31976" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 16:10:57.091029   32040 docker.go:132] docker version: linux-20.10.6
	I1117 16:10:57.091134   32040 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:10:57.267120   32040 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-18 00:10:57.205696469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:10:57.294146   32040 start.go:280] selected driver: docker
	I1117 16:10:57.294175   32040 start.go:775] validating driver "docker" against &{Name:download-only-20211117161028-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117161028-31976 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:10:57.294605   32040 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:10:57.470710   32040 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-18 00:10:57.409996147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:10:57.472824   32040 cni.go:93] Creating CNI manager for ""
	I1117 16:10:57.472843   32040 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:10:57.472853   32040 start_flags.go:282] config:
	{Name:download-only-20211117161028-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117161028-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:10:57.499899   32040 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 16:10:57.525513   32040 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 16:10:57.525514   32040 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 16:10:57.600032   32040 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 16:10:57.600082   32040 cache.go:57] Caching tarball of preloaded images
	I1117 16:10:57.600342   32040 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 16:10:57.626388   32040 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 16:10:57.644796   32040 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 16:10:57.644807   32040 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 16:10:57.721291   32040 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b55c92a19bc9eceed8b554be67ddf54e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117161028-31976"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.3/LogsDuration (0.27s)

TestDownloadOnly/v1.22.4-rc.0/json-events (9.76s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117161028-31976 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117161028-31976 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker : (9.756731642s)
--- PASS: TestDownloadOnly/v1.22.4-rc.0/json-events (9.76s)

TestDownloadOnly/v1.22.4-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.4-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.4-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.4-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.27s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117161028-31976
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117161028-31976: exit status 85 (273.305271ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 16:11:05
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 16:11:05.523389   32069 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:11:05.523508   32069 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:11:05.523513   32069 out.go:310] Setting ErrFile to fd 2...
	I1117 16:11:05.523516   32069 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:11:05.523591   32069 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	W1117 16:11:05.523682   32069 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: no such file or directory
	I1117 16:11:05.523816   32069 out.go:304] Setting JSON to true
	I1117 16:11:05.548932   32069 start.go:112] hostinfo: {"hostname":"37310.local","uptime":7840,"bootTime":1637186425,"procs":367,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:11:05.549027   32069 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:11:05.575564   32069 notify.go:174] Checking for updates...
	I1117 16:11:05.603597   32069 config.go:176] Loaded profile config "download-only-20211117161028-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	W1117 16:11:05.603720   32069 start.go:683] api.Load failed for download-only-20211117161028-31976: filestore "download-only-20211117161028-31976": Docker machine "download-only-20211117161028-31976" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 16:11:05.603803   32069 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 16:11:05.603833   32069 start.go:683] api.Load failed for download-only-20211117161028-31976: filestore "download-only-20211117161028-31976": Docker machine "download-only-20211117161028-31976" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 16:11:05.695780   32069 docker.go:132] docker version: linux-20.10.6
	I1117 16:11:05.695916   32069 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:11:05.869266   32069 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2021-11-18 00:11:05.814403323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:11:05.896436   32069 start.go:280] selected driver: docker
	I1117 16:11:05.896466   32069 start.go:775] validating driver "docker" against &{Name:download-only-20211117161028-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117161028-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:11:05.896914   32069 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:11:06.076161   32069 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2021-11-18 00:11:06.019257028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:11:06.078160   32069 cni.go:93] Creating CNI manager for ""
	I1117 16:11:06.078178   32069 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 16:11:06.078185   32069 start_flags.go:282] config:
	{Name:download-only-20211117161028-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:download-only-20211117161028-31976 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:11:06.105370   32069 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 16:11:06.132097   32069 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 16:11:06.132107   32069 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 16:11:06.207632   32069 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 16:11:06.207660   32069 cache.go:57] Caching tarball of preloaded images
	I1117 16:11:06.207856   32069 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 16:11:06.233848   32069 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 16:11:06.250554   32069 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 16:11:06.250569   32069 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 16:11:06.326519   32069 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8bc3d17fd8aad78343e2b84f0cac75d1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117161028-31976"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.27s)

TestDownloadOnly/DeleteAll (1.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.128094118s)
--- PASS: TestDownloadOnly/DeleteAll (1.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.65s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20211117161028-31976
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.65s)

TestDownloadOnlyKic (8.47s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20211117161117-31976 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20211117161117-31976 --force --alsologtostderr --driver=docker : (6.925623048s)
helpers_test.go:175: Cleaning up "download-docker-20211117161117-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20211117161117-31976
--- PASS: TestDownloadOnlyKic (8.47s)

TestOffline (120.62s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20211117170334-31976 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20211117170334-31976 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m44.829907922s)
helpers_test.go:175: Cleaning up "offline-docker-20211117170334-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20211117170334-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20211117170334-31976: (15.786872174s)
--- PASS: TestOffline (120.62s)

TestAddons/Setup (228.52s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20211117161126-31976 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20211117161126-31976 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m48.524786806s)
--- PASS: TestAddons/Setup (228.52s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 2.345704ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-77c99ccb96-2fvk5" [43748d20-b797-45bc-b010-f2d9fe4b42d5] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007651795s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20211117161126-31976 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/HelmTiller (11.44s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 12.017178ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-64b546c44b-jhkxg" [37119acb-dbaa-4365-bfbc-3ebff4f66c64] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014167175s

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Run:  kubectl --context addons-20211117161126-31976 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20211117161126-31976 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.824089031s)
addons_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.44s)

TestAddons/parallel/Olm (51.95s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20211117161126-31976 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s
addons_test.go:456: catalog-operator stabilized in 58.744977ms
addons_test.go:458: (dbg) Run:  kubectl --context addons-20211117161126-31976 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s
addons_test.go:461: olm-operator stabilized in 115.777995ms
addons_test.go:463: (dbg) Run:  kubectl --context addons-20211117161126-31976 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:466: packageserver stabilized in 176.550728ms
addons_test.go:468: (dbg) Run:  kubectl --context addons-20211117161126-31976 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:471: operatorhubio-catalog stabilized in 232.116504ms
addons_test.go:474: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/etcd.yaml
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117161126-31976 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117161126-31976 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117161126-31976 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117161126-31976 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (51.95s)

TestAddons/parallel/CSI (56.96s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 20.140406ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117161126-31976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117161126-31976 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [c3546686-e951-476e-ad8f-bb273da7ff26] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [c3546686-e951-476e-ad8f-bb273da7ff26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [c3546686-e951-476e-ad8f-bb273da7ff26] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 35.010017905s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20211117161126-31976 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20211117161126-31976 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20211117161126-31976 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20211117161126-31976 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117161126-31976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [380bc173-fe6c-43db-a3f5-31f9428e237d] Pending
helpers_test.go:342: "task-pv-pod-restore" [380bc173-fe6c-43db-a3f5-31f9428e237d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [380bc173-fe6c-43db-a3f5-31f9428e237d] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.015316665s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20211117161126-31976 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20211117161126-31976 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20211117161126-31976 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.094719565s)
addons_test.go:593: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.96s)

TestAddons/serial/GCPAuth (16.04s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20211117161126-31976 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [9251f994-725d-4f6d-ac92-64fab830b2f7] Pending
helpers_test.go:342: "busybox" [9251f994-725d-4f6d-ac92-64fab830b2f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [9251f994-725d-4f6d-ac92-64fab830b2f7] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.007676276s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20211117161126-31976 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20211117161126-31976 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20211117161126-31976 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-darwin-amd64 -p addons-20211117161126-31976 addons disable gcp-auth --alsologtostderr -v=1: (7.423539188s)
--- PASS: TestAddons/serial/GCPAuth (16.04s)

TestAddons/StoppedEnableDisable (18.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20211117161126-31976
addons_test.go:133: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20211117161126-31976: (17.695745862s)
addons_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20211117161126-31976
addons_test.go:141: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20211117161126-31976
--- PASS: TestAddons/StoppedEnableDisable (18.17s)

TestHyperKitDriverInstallOrUpdate (8.96s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2475036121
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2475036121/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2475036121/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2475036121/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperKitDriverInstallOrUpdate (8.96s)

TestErrorSpam/setup (70.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20211117161714-31976 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 --driver=docker 
error_spam_test.go:79: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20211117161714-31976 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 --driver=docker : (1m10.631634325s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.22.3."
--- PASS: TestErrorSpam/setup (70.63s)

TestErrorSpam/start (2.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 start --dry-run
--- PASS: TestErrorSpam/start (2.36s)

TestErrorSpam/status (1.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 status
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 status
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 status
--- PASS: TestErrorSpam/status (1.96s)

TestErrorSpam/pause (2.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 pause
--- PASS: TestErrorSpam/pause (2.21s)

TestErrorSpam/unpause (2.21s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 unpause
--- PASS: TestErrorSpam/unpause (2.21s)

TestErrorSpam/stop (18.1s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 stop
error_spam_test.go:157: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 stop: (17.340367993s)
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117161714-31976 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117161714-31976 stop
--- PASS: TestErrorSpam/stop (18.10s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1633: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/test/nested/copy/31976/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (123.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E1117 16:20:14.944118   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:14.953405   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:14.964930   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:14.986788   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:15.036972   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:15.120558   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:15.290789   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:15.611031   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:16.254911   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:17.537339   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:20.106012   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:25.226816   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:35.472223   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:20:55.953062   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
functional_test.go:2015: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m3.888439164s)
--- PASS: TestFunctional/serial/StartWithProxy (123.89s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.52s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:600: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --alsologtostderr -v=8
functional_test.go:600: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --alsologtostderr -v=8: (7.521147661s)
functional_test.go:604: soft start took 7.521650586s for "functional-20211117161858-31976" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.52s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:622: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (1.67s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:637: (dbg) Run:  kubectl --context functional-20211117161858-31976 get po -A
functional_test.go:637: (dbg) Done: kubectl --context functional-20211117161858-31976 get po -A: (1.667420262s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.67s)

TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1014: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211117161858-31976 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117161858-319763063799674
functional_test.go:1026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add minikube-local-cache-test:functional-20211117161858-31976
functional_test.go:1026: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache add minikube-local-cache-test:functional-20211117161858-31976: (1.48800316s)
functional_test.go:1031: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cache delete minikube-local-cache-test:functional-20211117161858-31976
functional_test.go:1020: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211117161858-31976
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:657: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 kubectl -- --context functional-20211117161858-31976 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.47s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:682: (dbg) Run:  out/kubectl --context functional-20211117161858-31976 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.55s)

TestFunctional/serial/LogsCmd (3.02s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs
functional_test.go:1173: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs: (3.018943071s)
--- PASS: TestFunctional/serial/LogsCmd (3.02s)

TestFunctional/serial/LogsFileCmd (3.19s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1190: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117161858-31976720665043/logs.txt
functional_test.go:1190: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117161858-31976720665043/logs.txt: (3.189677426s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.19s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 config get cpus: exit status 14 (45.087444ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 config set cpus 2
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 config get cpus
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 config unset cpus
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 config get cpus: exit status 14 (40.901376ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (4.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:847: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117161858-31976 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:852: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117161858-31976 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 34751: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.11s)

TestFunctional/parallel/DryRun (1.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (768.169134ms)

-- stdout --
	* [functional-20211117161858-31976] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1117 16:23:17.044791   34675 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:23:17.045019   34675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:23:17.045025   34675 out.go:310] Setting ErrFile to fd 2...
	I1117 16:23:17.045028   34675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:23:17.045127   34675 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 16:23:17.045411   34675 out.go:304] Setting JSON to false
	I1117 16:23:17.073796   34675 start.go:112] hostinfo: {"hostname":"37310.local","uptime":8572,"bootTime":1637186425,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:23:17.073943   34675 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:23:17.101220   34675 out.go:176] * [functional-20211117161858-31976] minikube v1.24.0 on Darwin 11.2.3
	I1117 16:23:17.175631   34675 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 16:23:17.202056   34675 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 16:23:17.253755   34675 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 16:23:17.321788   34675 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 16:23:17.323002   34675 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:23:17.323897   34675 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 16:23:17.431622   34675 docker.go:132] docker version: linux-20.10.6
	I1117 16:23:17.431842   34675 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:23:17.641888   34675 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:27 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-18 00:23:17.562531395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:23:17.670514   34675 out.go:176] * Using the docker driver based on existing profile
	I1117 16:23:17.670538   34675 start.go:280] selected driver: docker
	I1117 16:23:17.670545   34675 start.go:775] validating driver "docker" against &{Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-pro
visioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:23:17.670638   34675 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 16:23:17.717583   34675 out.go:176] 
	W1117 16:23:17.717791   34675 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 16:23:17.764390   34675 out.go:176] 

** /stderr **
functional_test.go:929: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.61s)
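A note on the units in the DryRun failure message above: minikube reports the request in MiB (binary) but states the usable minimum in MB (decimal). A minimal shell sketch of that comparison in bytes, using the 250/1800 figures from the log:

```shell
# The requested 250MiB (binary) converted to bytes, against the
# 1800MB (decimal) usable minimum that minikube enforces:
req=$((250 * 1024 * 1024))    # bytes requested
min=$((1800 * 1000 * 1000))   # bytes required
echo "requested=$req minimum=$min"
```

Since `req` is well below `min`, the dry run exits with status 23 as shown above.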

TestFunctional/parallel/InternationalLanguage (0.77s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117161858-31976 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (770.244726ms)

-- stdout --
	* [functional-20211117161858-31976] minikube v1.24.0 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1117 16:23:16.579307   34658 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:23:16.579446   34658 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:23:16.579451   34658 out.go:310] Setting ErrFile to fd 2...
	I1117 16:23:16.579454   34658 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:23:16.579567   34658 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 16:23:16.579830   34658 out.go:304] Setting JSON to false
	I1117 16:23:16.607612   34658 start.go:112] hostinfo: {"hostname":"37310.local","uptime":8571,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 16:23:16.607707   34658 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 16:23:16.634051   34658 out.go:176] * [functional-20211117161858-31976] minikube v1.24.0 sur Darwin 11.2.3
	I1117 16:23:16.680729   34658 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 16:23:16.706786   34658 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 16:23:16.733560   34658 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 16:23:16.759749   34658 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 16:23:16.760434   34658 config.go:176] Loaded profile config "functional-20211117161858-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:23:16.761051   34658 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 16:23:16.868263   34658 docker.go:132] docker version: linux-20.10.6
	I1117 16:23:16.868428   34658 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 16:23:17.076286   34658 info.go:263] docker info: {ID:2AYQ:M3LP:F6GE:IPKK:4XGD:7ZUA:OVWC:XKGI:AARV:HKMV:IHBV:IRRJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:27 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-18 00:23:16.996937208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 16:23:17.175658   34658 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1117 16:23:17.175699   34658 start.go:280] selected driver: docker
	I1117 16:23:17.175708   34658 start.go:775] validating driver "docker" against &{Name:functional-20211117161858-31976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117161858-31976 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-pro
visioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 16:23:17.175782   34658 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 16:23:17.227869   34658 out.go:176] 
	W1117 16:23:17.227997   34658 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 16:23:17.279959   34658 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.77s)

TestFunctional/parallel/StatusCmd (2.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:814: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.34s)

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 addons list
functional_test.go:1494: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (27.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [9b9ccef0-fccc-43f7-8dec-952d07564964] Running
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008516276s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20211117161858-31976 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20211117161858-31976 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211117161858-31976 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211117161858-31976 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [4047deac-e653-433d-804a-0f25baf34914] Pending
helpers_test.go:342: "sp-pod" [4047deac-e653-433d-804a-0f25baf34914] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [4047deac-e653-433d-804a-0f25baf34914] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.010152677s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20211117161858-31976 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20211117161858-31976 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211117161858-31976 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [2a2da24c-43c9-4958-9cb6-a27a2dcc8592] Pending
helpers_test.go:342: "sp-pod" [2a2da24c-43c9-4958-9cb6-a27a2dcc8592] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [2a2da24c-43c9-4958-9cb6-a27a2dcc8592] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.006373906s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20211117161858-31976 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.57s)
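The waits at functional_test_pvc_test.go:131 above poll the pod until its phase reaches Running (with a 3m0s timeout). A rough sketch of that polling shape; `phases` here is a hypothetical stand-in for repeated `kubectl get pod sp-pod -o jsonpath='{.status.phase}'` calls, not minikube's actual helper:

```shell
# Simulated phase sequence: the pod is Pending twice, then Running.
phases="Pending Pending Running"
polls=0
phase=""
until [ "$phase" = "Running" ]; do
  polls=$((polls + 1))
  # Pick the next simulated phase (stand-in for a kubectl query).
  phase=$(echo "$phases" | cut -d' ' -f"$polls")
done
echo "Running after $polls polls"
```

The real test additionally accepts the richer readiness conditions shown in the helpers_test.go:342 lines (Ready / ContainersReady), not just the top-level phase.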

TestFunctional/parallel/SSHCmd (1.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "echo hello"
functional_test.go:1534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.45s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

TestFunctional/parallel/MySQL (25.14s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1571: (dbg) Run:  kubectl --context functional-20211117161858-31976 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1577: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-9bbbc5bbb-7f7qx" [54265370-df6e-41bc-9406-1d590596ae77] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-9bbbc5bbb-7f7qx" [54265370-df6e-41bc-9406-1d590596ae77] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1577: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.016980335s
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;": exit status 1 (142.991624ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;": exit status 1 (140.63638ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;": exit status 1 (138.053643ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117161858-31976 exec mysql-9bbbc5bbb-7f7qx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.14s)
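The repeated non-zero exits above are expected: the test keeps retrying the `show databases;` query while mysqld inside the pod is still initializing (first an auth error during bootstrap, then socket-not-ready errors). A generic sketch of such a retry loop; `try_query` is a hypothetical stand-in for the `kubectl exec ... -- mysql` invocation, rigged here to fail twice before succeeding:

```shell
attempts=0
# Stand-in for the real query: fails until the third attempt,
# mimicking a MySQL server that is still starting up.
try_query() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
until try_query; do
  sleep 0.1   # the real test backs off between attempts
done
echo "query succeeded on attempt $attempts"
```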

TestFunctional/parallel/FileSync (0.71s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1707: Checking for existence of /etc/test/nested/copy/31976/hosts within VM
functional_test.go:1709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /etc/test/nested/copy/31976/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1714: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.71s)

TestFunctional/parallel/CertSync (4.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/31976.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /etc/ssl/certs/31976.pem"
functional_test.go:1750: Checking for existence of /usr/share/ca-certificates/31976.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /usr/share/ca-certificates/31976.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1777: Checking for existence of /etc/ssl/certs/319762.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /etc/ssl/certs/319762.pem"
functional_test.go:1777: Checking for existence of /usr/share/ca-certificates/319762.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /usr/share/ca-certificates/319762.pem"
functional_test.go:1777: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.22s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:213: (dbg) Run:  kubectl --context functional-20211117161858-31976 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo systemctl is-active crio"
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo systemctl is-active crio": exit status 1 (695.739852ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)

TestFunctional/parallel/ImageCommands/ImageList (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList
=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image ls
2021/11/17 16:23:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:246: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image ls:
k8s.gcr.io/pause:3.5
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20211117161858-31976
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageList (0.46s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh pgrep buildkitd: exit status 1 (704.040096ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image build -t localhost/my-image:functional-20211117161858-31976 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:271: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image build -t localhost/my-image:functional-20211117161858-31976 testdata/build: (2.950704512s)
functional_test.go:276: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image build -t localhost/my-image:functional-20211117161858-31976 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
e685c5c858e3: Pulling fs layer
e685c5c858e3: Verifying Checksum
e685c5c858e3: Download complete
e685c5c858e3: Pull complete
Digest: sha256:e7157b6d7ebbe2cce5eaa8cfe8aa4fa82d173999b9f90a9ec42e57323546c353
Status: Downloaded newer image for busybox:latest
---> 7138284460ff
Step 2/3 : RUN true
---> Running in 35f50014d6fc
Removing intermediate container 35f50014d6fc
---> bec6d2e46736
Step 3/3 : ADD content.txt /
---> ebde6e1bc13b
Successfully built ebde6e1bc13b
Successfully tagged localhost/my-image:functional-20211117161858-31976
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)

TestFunctional/parallel/ImageCommands/Setup (4.23s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.08560384s)
functional_test.go:303: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.23s)

TestFunctional/parallel/DockerEnv/bash (2.54s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:440: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117161858-31976 docker-env) && out/minikube-darwin-amd64 status -p functional-20211117161858-31976"
functional_test.go:440: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117161858-31976 docker-env) && out/minikube-darwin-amd64 status -p functional-20211117161858-31976": (1.533345316s)
functional_test.go:463: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117161858-31976 docker-env) && docker images"
functional_test.go:463: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117161858-31976 docker-env) && docker images": (1.006711909s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.54s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.47s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.47s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.9s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.90s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117161858-31976

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117161858-31976: (3.430149924s)
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.90s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image save gcr.io/google-containers/addon-resizer:functional-20211117161858-31976 /Users/jenkins/workspace/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image save gcr.io/google-containers/addon-resizer:functional-20211117161858-31976 /Users/jenkins/workspace/addon-resizer-save.tar: (1.567236026s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:333: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image rm gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.05s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.928402069s)
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.45s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:360: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
functional_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117161858-31976: (4.398997713s)
functional_test.go:370: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20211117161858-31976 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20211117161858-31976 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [57aea5f7-b2d5-4326-8e4d-cf62f072bd15] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [57aea5f7-b2d5-4326-8e4d-cf62f072bd15] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.00871383s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20211117161858-31976 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.06s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20211117161858-31976 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 34412: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.85s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1218: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.85s)

TestFunctional/parallel/ProfileCmd/profile_list (0.91s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Run:  out/minikube-darwin-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1258: Took "833.925508ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1272: Took "79.996652ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.91s)

TestFunctional/parallel/MountCmd/specific-port (3.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:226: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211117161858-31976 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest2053224041:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (843.215315ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:274: guest mount directory contents
total 0
functional_test_mount_test.go:276: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211117161858-31976 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest2053224041:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:277: reading mount text
functional_test_mount_test.go:291: done reading mount text
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh "sudo umount -f /mount-9p": exit status 1 (669.570193ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:245: "out/minikube-darwin-amd64 -p functional-20211117161858-31976 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:247: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211117161858-31976 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest2053224041:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.70s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.17s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Done: out/minikube-darwin-amd64 profile list -o json: (1.071244778s)
functional_test.go:1309: Took "1.071324736s" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1317: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1322: Took "99.979311ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.17s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2037: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117161858-31976 version -o=json --components: (1.196281207s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

TestFunctional/delete_addon-resizer_images (0.29s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211117161858-31976
--- PASS: TestFunctional/delete_addon-resizer_images (0.29s)

TestFunctional/delete_my-image_image (0.12s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:192: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211117161858-31976
--- PASS: TestFunctional/delete_my-image_image (0.12s)

TestFunctional/delete_minikube_cached_images (0.12s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:200: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211117161858-31976
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

TestIngressAddonLegacy/StartLegacyK8sCluster (131.08s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117162339-31976 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1117 16:25:14.930787   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:25:42.678564   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117162339-31976 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m11.083392376s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (131.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (22.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117162339-31976 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117162339-31976 addons enable ingress --alsologtostderr -v=5: (22.438636308s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (22.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117162339-31976 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestJSONOutput/start/Command (122.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20211117162713-31976 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1117 16:27:20.377222   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:20.382375   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:20.393565   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:20.414226   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:20.455088   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:20.543305   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:20.703456   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:21.023560   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:21.663819   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:22.953669   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:25.513995   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:30.640262   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:27:40.890527   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:28:01.371159   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:28:42.335895   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20211117162713-31976 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m2.936051738s)
--- PASS: TestJSONOutput/start/Command (122.94s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.88s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20211117162713-31976 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.88s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.77s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20211117162713-31976 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.23s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20211117162713-31976 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20211117162713-31976 --output=json --user=testUser: (18.233320464s)
--- PASS: TestJSONOutput/stop/Command (18.23s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20211117162942-31976 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20211117162942-31976 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (125.284128ms)

-- stdout --
	{"specversion":"1.0","id":"72c8efde-0c53-4d95-889a-7a1649558a14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211117162942-31976] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee0db72e-c242-4b59-8514-3a0fdc27b067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"fea7c6c7-189e-44bc-87dd-71f8c7ecb79d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig"}}
	{"specversion":"1.0","id":"1f185a0f-472b-453f-af19-347d05e669fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"cfa6e62a-4c7e-4886-a282-ee6bc801e55b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube"}}
	{"specversion":"1.0","id":"620035db-e739-4deb-bc6b-db3fa0fe9734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20211117162942-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20211117162942-31976
--- PASS: TestErrorJSONOutput (0.78s)

TestKicCustomNetwork/create_custom_network (86.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211117162943-31976 --network=
E1117 16:30:04.267047   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:30:14.937790   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211117162943-31976 --network=: (1m13.659868566s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211117162943-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211117162943-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211117162943-31976: (13.126869429s)
--- PASS: TestKicCustomNetwork/create_custom_network (86.91s)

TestKicCustomNetwork/use_default_bridge_network (70.53s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211117163110-31976 --network=bridge
E1117 16:31:14.127290   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.132414   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.142661   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.168298   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.218311   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.298921   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.468252   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:14.789012   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:15.429156   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:16.717581   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:19.277844   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:24.399192   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:34.649468   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:31:55.130839   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211117163110-31976 --network=bridge: (1m0.667836725s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211117163110-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211117163110-31976
E1117 16:32:20.384637   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211117163110-31976: (9.7426823s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (70.53s)

TestKicExistingNetwork (86.77s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20211117163226-31976 --network=existing-network
E1117 16:32:36.101889   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:32:48.119668   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20211117163226-31976 --network=existing-network: (1m7.477093195s)
helpers_test.go:175: Cleaning up "existing-network-20211117163226-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20211117163226-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20211117163226-31976: (13.622968352s)
--- PASS: TestKicExistingNetwork (86.77s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (70.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20211117163347-31976 --memory=2048 --mount --driver=docker 
E1117 16:33:58.026270   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
mount_start_test.go:77: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20211117163347-31976 --memory=2048 --mount --driver=docker : (1m10.763496975s)
--- PASS: TestMountStart/serial/StartWithMountFirst (70.76s)

TestMountStart/serial/StartWithMountSecond (60.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211117163347-31976 --memory=2048 --mount --driver=docker 
E1117 16:35:14.945518   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
mount_start_test.go:77: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20211117163347-31976 --memory=2048 --mount --driver=docker : (1m0.636725975s)
--- PASS: TestMountStart/serial/StartWithMountSecond (60.64s)

TestMountStart/serial/VerifyMountFirst (0.63s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20211117163347-31976 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.63s)

TestMountStart/serial/VerifyMountSecond (0.64s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117163347-31976 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.64s)

TestMountStart/serial/DeleteFirst (12.22s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20211117163347-31976 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20211117163347-31976 --alsologtostderr -v=5: (12.223262734s)
--- PASS: TestMountStart/serial/DeleteFirst (12.22s)

TestMountStart/serial/VerifyMountPostDelete (0.66s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117163347-31976 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.66s)

TestMountStart/serial/Stop (17.89s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20211117163347-31976
E1117 16:36:14.132942   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20211117163347-31976: (17.888030497s)
--- PASS: TestMountStart/serial/Stop (17.89s)

TestMountStart/serial/RestartStopped (48.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211117163347-31976
E1117 16:36:38.064733   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:36:41.873456   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
mount_start_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20211117163347-31976: (48.601393865s)
--- PASS: TestMountStart/serial/RestartStopped (48.60s)

TestMountStart/serial/VerifyMountPostStop (0.62s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117163347-31976 ssh ls /minikube-host
E1117 16:37:20.393919   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.62s)

TestMultiNode/serial/FreshStart2Nodes (218.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117163734-31976 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1117 16:40:14.937310   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
multinode_test.go:82: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211117163734-31976 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m37.057323294s)
multinode_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr
multinode_test.go:88: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr: (1.100653513s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (218.16s)

TestMultiNode/serial/DeployApp2Nodes (6.62s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:463: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.828572481s)
multinode_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- rollout status deployment/busybox
E1117 16:41:14.120470   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
multinode_test.go:468: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- rollout status deployment/busybox: (3.237384336s)
multinode_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:494: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-55jx9 -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-g894g -- nslookup kubernetes.io
multinode_test.go:504: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-55jx9 -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-g894g -- nslookup kubernetes.default
multinode_test.go:512: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-55jx9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-g894g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.62s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:530: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-55jx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-55jx9 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:530: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-g894g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117163734-31976 -- exec busybox-84b6686758-g894g -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

TestMultiNode/serial/AddNode (107.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211117163734-31976 -v 3 --alsologtostderr
E1117 16:42:20.382150   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
multinode_test.go:107: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20211117163734-31976 -v 3 --alsologtostderr: (1m45.961056276s)
multinode_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr
multinode_test.go:113: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr: (1.563874067s)
--- PASS: TestMultiNode/serial/AddNode (107.53s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (5.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --output json --alsologtostderr
multinode_test.go:170: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --output json --alsologtostderr: (1.557212143s)
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 cp testdata/cp-test.txt multinode-20211117163734-31976-m02:/home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 ssh -n multinode-20211117163734-31976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 cp testdata/cp-test.txt multinode-20211117163734-31976-m03:/home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 ssh -n multinode-20211117163734-31976-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.27s)

TestMultiNode/serial/StopNode (11.97s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 node stop m03
multinode_test.go:192: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 node stop m03: (9.451434203s)
multinode_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status: exit status 7 (1.294669659s)

-- stdout --
	multinode-20211117163734-31976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211117163734-31976-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211117163734-31976-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr: exit status 7 (1.223618245s)

-- stdout --
	multinode-20211117163734-31976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211117163734-31976-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211117163734-31976-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1117 16:43:24.012452   38002 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:43:24.012576   38002 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:43:24.012581   38002 out.go:310] Setting ErrFile to fd 2...
	I1117 16:43:24.012584   38002 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:43:24.012666   38002 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 16:43:24.012843   38002 out.go:304] Setting JSON to false
	I1117 16:43:24.012857   38002 mustload.go:65] Loading cluster: multinode-20211117163734-31976
	I1117 16:43:24.013102   38002 config.go:176] Loaded profile config "multinode-20211117163734-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:43:24.013114   38002 status.go:253] checking status of multinode-20211117163734-31976 ...
	I1117 16:43:24.013452   38002 cli_runner.go:115] Run: docker container inspect multinode-20211117163734-31976 --format={{.State.Status}}
	I1117 16:43:24.134185   38002 status.go:328] multinode-20211117163734-31976 host status = "Running" (err=<nil>)
	I1117 16:43:24.134211   38002 host.go:66] Checking if "multinode-20211117163734-31976" exists ...
	I1117 16:43:24.134505   38002 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211117163734-31976
	I1117 16:43:24.254097   38002 host.go:66] Checking if "multinode-20211117163734-31976" exists ...
	I1117 16:43:24.254387   38002 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:43:24.254461   38002 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117163734-31976
	I1117 16:43:24.373425   38002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59462 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/multinode-20211117163734-31976/id_rsa Username:docker}
	I1117 16:43:24.454918   38002 ssh_runner.go:152] Run: systemctl --version
	I1117 16:43:24.459547   38002 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:43:24.469584   38002 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20211117163734-31976
	I1117 16:43:24.588950   38002 kubeconfig.go:92] found "multinode-20211117163734-31976" server: "https://127.0.0.1:59466"
	I1117 16:43:24.588971   38002 api_server.go:165] Checking apiserver status ...
	I1117 16:43:24.589014   38002 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 16:43:24.604215   38002 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/2019/cgroup
	I1117 16:43:24.612113   38002 api_server.go:181] apiserver freezer: "7:freezer:/docker/670d6aed76a19a8b6d4a873ed18775654213e94b78a8e6900ecdf9e9c6e3fdaa/kubepods/burstable/podf3abd7b7290fbc32408bd7b6081c67e5/4d8938436d1982dc160ef57b469b53f794338d4582803e4d50c1c4c192f7fadf"
	I1117 16:43:24.612171   38002 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/670d6aed76a19a8b6d4a873ed18775654213e94b78a8e6900ecdf9e9c6e3fdaa/kubepods/burstable/podf3abd7b7290fbc32408bd7b6081c67e5/4d8938436d1982dc160ef57b469b53f794338d4582803e4d50c1c4c192f7fadf/freezer.state
	I1117 16:43:24.619428   38002 api_server.go:203] freezer state: "THAWED"
	I1117 16:43:24.619448   38002 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59466/healthz ...
	I1117 16:43:24.624957   38002 api_server.go:266] https://127.0.0.1:59466/healthz returned 200:
	ok
	I1117 16:43:24.624968   38002 status.go:419] multinode-20211117163734-31976 apiserver status = Running (err=<nil>)
	I1117 16:43:24.624977   38002 status.go:255] multinode-20211117163734-31976 status: &{Name:multinode-20211117163734-31976 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:43:24.625000   38002 status.go:253] checking status of multinode-20211117163734-31976-m02 ...
	I1117 16:43:24.625307   38002 cli_runner.go:115] Run: docker container inspect multinode-20211117163734-31976-m02 --format={{.State.Status}}
	I1117 16:43:24.745560   38002 status.go:328] multinode-20211117163734-31976-m02 host status = "Running" (err=<nil>)
	I1117 16:43:24.745581   38002 host.go:66] Checking if "multinode-20211117163734-31976-m02" exists ...
	I1117 16:43:24.745852   38002 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211117163734-31976-m02
	I1117 16:43:24.864720   38002 host.go:66] Checking if "multinode-20211117163734-31976-m02" exists ...
	I1117 16:43:24.864981   38002 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:43:24.865039   38002 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117163734-31976-m02
	I1117 16:43:24.986164   38002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59803 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/multinode-20211117163734-31976-m02/id_rsa Username:docker}
	I1117 16:43:25.066255   38002 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:43:25.075530   38002 status.go:255] multinode-20211117163734-31976-m02 status: &{Name:multinode-20211117163734-31976-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:43:25.075557   38002 status.go:253] checking status of multinode-20211117163734-31976-m03 ...
	I1117 16:43:25.075864   38002 cli_runner.go:115] Run: docker container inspect multinode-20211117163734-31976-m03 --format={{.State.Status}}
	I1117 16:43:25.195000   38002 status.go:328] multinode-20211117163734-31976-m03 host status = "Stopped" (err=<nil>)
	I1117 16:43:25.195020   38002 status.go:341] host is not running, skipping remaining checks
	I1117 16:43:25.195026   38002 status.go:255] multinode-20211117163734-31976-m03 status: &{Name:multinode-20211117163734-31976-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (11.97s)

TestMultiNode/serial/StartAfterStop (53.55s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 node start m03 --alsologtostderr
E1117 16:43:43.478625   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
multinode_test.go:236: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 node start m03 --alsologtostderr: (51.823361475s)
multinode_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status
multinode_test.go:243: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status: (1.568916442s)
multinode_test.go:257: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (53.55s)

TestMultiNode/serial/RestartKeepsNodes (249.82s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117163734-31976
multinode_test.go:272: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20211117163734-31976
multinode_test.go:272: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20211117163734-31976: (40.122004066s)
multinode_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117163734-31976 --wait=true -v=8 --alsologtostderr
E1117 16:45:14.942180   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:46:14.127947   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
E1117 16:47:20.388816   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
E1117 16:47:37.231679   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
multinode_test.go:277: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211117163734-31976 --wait=true -v=8 --alsologtostderr: (3m29.604865519s)
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117163734-31976
--- PASS: TestMultiNode/serial/RestartKeepsNodes (249.82s)

TestMultiNode/serial/DeleteNode (17.42s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 node delete m03
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 node delete m03: (14.525876945s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr: (1.095060251s)
multinode_test.go:396: (dbg) Run:  docker volume ls
multinode_test.go:406: (dbg) Run:  kubectl get nodes
multinode_test.go:406: (dbg) Done: kubectl get nodes: (1.629802476s)
multinode_test.go:414: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (17.42s)

TestMultiNode/serial/StopMultiNode (35.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 stop
multinode_test.go:296: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 stop: (35.094489878s)
multinode_test.go:302: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status: exit status 7 (270.625595ms)

-- stdout --
	multinode-20211117163734-31976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211117163734-31976-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:309: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr: exit status 7 (274.403237ms)

-- stdout --
	multinode-20211117163734-31976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211117163734-31976-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1117 16:49:21.400245   38824 out.go:297] Setting OutFile to fd 1 ...
	I1117 16:49:21.400418   38824 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:49:21.400423   38824 out.go:310] Setting ErrFile to fd 2...
	I1117 16:49:21.400426   38824 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 16:49:21.400494   38824 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 16:49:21.400660   38824 out.go:304] Setting JSON to false
	I1117 16:49:21.400674   38824 mustload.go:65] Loading cluster: multinode-20211117163734-31976
	I1117 16:49:21.400907   38824 config.go:176] Loaded profile config "multinode-20211117163734-31976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 16:49:21.400919   38824 status.go:253] checking status of multinode-20211117163734-31976 ...
	I1117 16:49:21.401259   38824 cli_runner.go:115] Run: docker container inspect multinode-20211117163734-31976 --format={{.State.Status}}
	I1117 16:49:21.517915   38824 status.go:328] multinode-20211117163734-31976 host status = "Stopped" (err=<nil>)
	I1117 16:49:21.517941   38824 status.go:341] host is not running, skipping remaining checks
	I1117 16:49:21.517949   38824 status.go:255] multinode-20211117163734-31976 status: &{Name:multinode-20211117163734-31976 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:49:21.517978   38824 status.go:253] checking status of multinode-20211117163734-31976-m02 ...
	I1117 16:49:21.518294   38824 cli_runner.go:115] Run: docker container inspect multinode-20211117163734-31976-m02 --format={{.State.Status}}
	I1117 16:49:21.631730   38824 status.go:328] multinode-20211117163734-31976-m02 host status = "Stopped" (err=<nil>)
	I1117 16:49:21.631758   38824 status.go:341] host is not running, skipping remaining checks
	I1117 16:49:21.631765   38824 status.go:255] multinode-20211117163734-31976-m02 status: &{Name:multinode-20211117163734-31976-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (35.64s)

TestMultiNode/serial/RestartMultiNode (150.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117163734-31976 --wait=true -v=8 --alsologtostderr --driver=docker 
E1117 16:50:14.949463   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:51:14.136426   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
multinode_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211117163734-31976 --wait=true -v=8 --alsologtostderr --driver=docker : (2m27.809641173s)
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211117163734-31976 status --alsologtostderr: (1.133978008s)
multinode_test.go:356: (dbg) Run:  kubectl get nodes
multinode_test.go:356: (dbg) Done: kubectl get nodes: (1.633961231s)
multinode_test.go:364: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (150.73s)

TestMultiNode/serial/ValidateNameConflict (95.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117163734-31976
multinode_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117163734-31976-m02 --driver=docker 
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117163734-31976-m02 --driver=docker : exit status 14 (316.215785ms)

-- stdout --
	* [multinode-20211117163734-31976-m02] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20211117163734-31976-m02' is duplicated with machine name 'multinode-20211117163734-31976-m02' in profile 'multinode-20211117163734-31976'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117163734-31976-m03 --driver=docker 
E1117 16:52:20.400049   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
multinode_test.go:442: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211117163734-31976-m03 --driver=docker : (1m18.5814641s)
multinode_test.go:449: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211117163734-31976
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20211117163734-31976: exit status 80 (595.152573ms)

-- stdout --
	* Adding node m03 to cluster multinode-20211117163734-31976
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20211117163734-31976-m03 already exists in multinode-20211117163734-31976-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20211117163734-31976-m03
E1117 16:53:18.063144   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
multinode_test.go:454: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20211117163734-31976-m03: (15.914378752s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (95.45s)

TestPreload (239.36s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20211117165351-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E1117 16:55:14.938747   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
E1117 16:56:14.129954   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20211117165351-31976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m48.286808534s)
preload_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20211117165351-31976 -- docker pull busybox
preload_test.go:62: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20211117165351-31976 -- docker pull busybox: (3.248703209s)
preload_test.go:72: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20211117165351-31976 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
E1117 16:57:20.387565   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20211117165351-31976 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (53.511689356s)
preload_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20211117165351-31976 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20211117165351-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20211117165351-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20211117165351-31976: (13.640416871s)
--- PASS: TestPreload (239.36s)

TestScheduledStopUnix (154.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20211117165750-31976 --memory=2048 --driver=docker 
scheduled_stop_test.go:129: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20211117165750-31976 --memory=2048 --driver=docker : (1m14.939356424s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211117165750-31976 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20211117165750-31976 -n scheduled-stop-20211117165750-31976
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211117165750-31976 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211117165750-31976 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117165750-31976 -n scheduled-stop-20211117165750-31976
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20211117165750-31976
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211117165750-31976 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E1117 17:00:14.946658   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20211117165750-31976
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20211117165750-31976: exit status 7 (153.230693ms)

-- stdout --
	scheduled-stop-20211117165750-31976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117165750-31976 -n scheduled-stop-20211117165750-31976
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117165750-31976 -n scheduled-stop-20211117165750-31976: exit status 7 (192.361996ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20211117165750-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20211117165750-31976
E1117 17:00:23.483960   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20211117165750-31976: (6.564376025s)
--- PASS: TestScheduledStopUnix (154.13s)
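Result lines in this report follow go test's `--- PASS|FAIL|SKIP: Name (seconds)` shape. A minimal sketch (not part of the original report) for extracting the verdict, test name, and duration from such a line when triaging runs like this one:

```python
import re

# go test summary line, e.g. "--- PASS: TestScheduledStopUnix (154.13s)"
RESULT = re.compile(r"--- (PASS|FAIL|SKIP): (\S+) \(([\d.]+)s\)")

def parse_result(line: str):
    """Parse a go test summary line into (verdict, test name, seconds), or None."""
    m = RESULT.search(line)
    if not m:
        return None
    verdict, name, secs = m.groups()
    return verdict, name, float(secs)

print(parse_result("--- PASS: TestScheduledStopUnix (154.13s)"))
# → ('PASS', 'TestScheduledStopUnix', 154.13)
```

Feeding every line of the report through `parse_result` and discarding the `None`s yields the per-test durations summarized in the tables at the top.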

TestSkaffold (127.63s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2902105507 version
skaffold_test.go:61: skaffold version: v1.35.0
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20211117170024-31976 --memory=2600 --driver=docker 
E1117 17:01:14.130701   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20211117170024-31976 --memory=2600 --driver=docker : (1m13.15437098s)
skaffold_test.go:84: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:108: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2902105507 run --minikube-profile skaffold-20211117170024-31976 --kube-context skaffold-20211117170024-31976 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2902105507 run --minikube-profile skaffold-20211117170024-31976 --kube-context skaffold-20211117170024-31976 --status-check=true --port-forward=false --interactive=false: (28.955973058s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-f8b75d55d-v4llf" [1ad4438d-7c33-48ed-9e00-a1d61b44adf3] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013061842s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-68c45fb68f-s4lqg" [a50d4d65-b944-4da1-8583-8f8cf27f06ad] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007201085s
helpers_test.go:175: Cleaning up "skaffold-20211117170024-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20211117170024-31976
E1117 17:02:20.392472   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117161858-31976/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20211117170024-31976: (13.936864279s)
--- PASS: TestSkaffold (127.63s)

TestInsufficientStorage (62.27s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20211117170232-31976 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20211117170232-31976 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (49.152942456s)

-- stdout --
	{"specversion":"1.0","id":"36cdecc5-7778-481e-a44b-fb8474db44db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211117170232-31976] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bbdcc9b-281a-41b3-84c3-f72f41d2781d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"04e91e78-412c-4bce-a47f-8c127bb41f0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig"}}
	{"specversion":"1.0","id":"14feecfd-0808-4484-979d-5ccb2e5c8722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"417fa271-24b4-45c2-8c60-b03099e230dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube"}}
	{"specversion":"1.0","id":"cbca21da-8bf6-4958-9d2a-badcea235d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ad0b60e4-3b3b-498e-b8ae-5d450c4a0169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e4aa7e1-a446-4fdd-84dc-f6439b9a511a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211117170232-31976 in cluster insufficient-storage-20211117170232-31976","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa45784a-72fa-479f-b860-24ab457205fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"194c3b13-056d-4262-952b-1254f8a30c75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d32af9ba-b8cf-4911-9a84-438401c40a35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
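Each line of the `--output=json` stdout above is a CloudEvents envelope with the human-readable text under `data.message`. A minimal sketch (not part of the original report; the sample line is abridged from the log above) for pulling those messages out:

```python
import json

# One abridged event line in the shape shown in the stdout block above.
line = '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=12739"}}'

def message_of(event_line: str) -> str:
    """Return data.message from one minikube CloudEvents line ('' if absent)."""
    event = json.loads(event_line)
    return event.get("data", {}).get("message", "")

print(message_of(line))  # → MINIKUBE_LOCATION=12739
```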
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20211117170232-31976 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20211117170232-31976 --output=json --layout=cluster: exit status 7 (601.087969ms)

-- stdout --
	{"Name":"insufficient-storage-20211117170232-31976","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211117170232-31976","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1117 17:03:22.300303   40772 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211117170232-31976" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20211117170232-31976 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20211117170232-31976 --output=json --layout=cluster: exit status 7 (590.645081ms)

-- stdout --
	{"Name":"insufficient-storage-20211117170232-31976","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211117170232-31976","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1117 17:03:22.891912   40789 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211117170232-31976" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	E1117 17:03:22.902442   40789 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/insufficient-storage-20211117170232-31976/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20211117170232-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20211117170232-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20211117170232-31976: (11.92336292s)
--- PASS: TestInsufficientStorage (62.27s)
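The `status --output=json --layout=cluster` payloads above encode health as numeric status codes (507 InsufficientStorage, 405 Stopped, 500 Error). A minimal sketch (not part of the original report; the payload is abridged from the stdout above, and the ≥400/≥500 thresholds are an assumption for illustration) for flagging unhealthy nodes and components:

```python
import json

# Abridged cluster-layout status in the shape shown above.
status = json.loads('''{"Name":"insufficient-storage-20211117170232-31976",
 "StatusCode":507,"StatusName":"InsufficientStorage",
 "Nodes":[{"Name":"insufficient-storage-20211117170232-31976",
   "StatusCode":507,"StatusName":"InsufficientStorage",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"}}}]}''')

def unhealthy(cluster: dict) -> list:
    """Collect (name, status) pairs for nodes (code >= 500) and components (code >= 400)."""
    bad = []
    for node in cluster.get("Nodes", []):
        if node["StatusCode"] >= 500:
            bad.append((node["Name"], node["StatusName"]))
        for comp in node.get("Components", {}).values():
            if comp["StatusCode"] >= 400:
                bad.append((comp["Name"], comp["StatusName"]))
    return bad

print(unhealthy(status))
# → [('insufficient-storage-20211117170232-31976', 'InsufficientStorage'), ('apiserver', 'Stopped')]
```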

TestMissingContainerUpgrade (177.64s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3493295732.exe start -p missing-upgrade-20211117170335-31976 --memory=2200 --driver=docker 
E1117 17:04:17.239519   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117162339-31976/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3493295732.exe start -p missing-upgrade-20211117170335-31976 --memory=2200 --driver=docker : (1m13.59098197s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20211117170335-31976
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20211117170335-31976: (11.391400231s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20211117170335-31976
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20211117170335-31976 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E1117 17:05:14.947183   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20211117170335-31976 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m15.604504641s)
helpers_test.go:175: Cleaning up "missing-upgrade-20211117170335-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20211117170335-31976
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20211117170335-31976: (16.039087527s)
--- PASS: TestMissingContainerUpgrade (177.64s)

TestStoppedBinaryUpgrade/Setup (0.93s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.93s)

TestPause/serial/DeletePaused (0.67s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20211117171027-31976 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.67s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.1s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117171033-31976 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117171033-31976 "sudo systemctl is-active --quiet service kubelet": exit status 85 (97.097766ms)

-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.1s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117171033-31976 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117171033-31976 "sudo systemctl is-active --quiet service kubelet": exit status 85 (95.853383ms)

-- stdout --
	* Profile "NoKubernetes-20211117171033-31976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117171033-31976"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.10s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.56s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.56s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/245)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.22.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.3/cached-images (0.00s)

TestDownloadOnly/v1.22.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.3/binaries (0.00s)

TestDownloadOnly/v1.22.4-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.4-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (14.24s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 14.055231ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-fbwl5" [dbd2d518-5369-4f43-b417-0e5202d14b93] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013967337s

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-c2b66" [7af01d6e-b8ef-451d-8934-06747a77a062] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013762241s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20211117161126-31976 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20211117161126-31976 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20211117161126-31976 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.124228935s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.24s)

TestAddons/parallel/Ingress (11.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20211117161126-31976 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20211117161126-31976 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20211117161126-31976 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a098a29c-71be-4047-ab10-5739f3f5dd64] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [a098a29c-71be-4047-ab10-5739f3f5dd64] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007050584s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211117161126-31976 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.23s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmd (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Run:  kubectl --context functional-20211117161858-31976 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1378: (dbg) Run:  kubectl --context functional-20211117161858-31976 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1383: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-6cbfcd7cbc-rpmdt" [61eab475-dc5e-4cec-bfec-b36bee7473f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1117 16:22:58.832366   31976 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-30388-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117161126-31976/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-6cbfcd7cbc-rpmdt" [61eab475-dc5e-4cec-bfec-b36bee7473f6] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1383: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 14.015905229s
functional_test.go:1388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117161858-31976 service list
functional_test.go:1397: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (15.02s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:491: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctional/parallel/MountCmd/any-port (0s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:62: Skipping until https://github.com/kubernetes/minikube/issues/12301 is resolved.
--- SKIP: TestFunctional/parallel/MountCmd/any-port (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (45.33s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20211117162339-31976 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.173441952s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (204.436971ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.100.70.38:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (142.709148ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.100.70.38:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (158.600162ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.100.70.38:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (162.420048ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.100.70.38:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (158.154391ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.100.70.38:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (163.249732ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.100.70.38:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117162339-31976 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [d9ce333d-ee2b-4869-8b27-e422d66bd5d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [d9ce333d-ee2b-4869-8b27-e422d66bd5d1] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.017021354s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117162339-31976 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (45.33s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.8s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20211117170334-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20211117170334-31976
--- SKIP: TestNetworkPlugins/group/flannel (0.80s)

TestStartStop/group/disable-driver-mounts (0.67s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20211117171128-31976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20211117171128-31976
--- SKIP: TestStartStop/group/disable-driver-mounts (0.67s)