Test Report: Docker_macOS 13251

c4800a61159ffc3ce43d26d0a2acbbe0889dab73:2022-01-26:22409

Failed tests (14/275)

TestDownloadOnly/v1.23.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.2/cached-images
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.2" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.2: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.2" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.2: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.2" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.2: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.2" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.2: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.6" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.2/cached-images (0.00s)

TestDownloadOnly/v1.23.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.2/binaries
aaa_download_only_test.go:149: expected the file for binary exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubelet" but got error stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubelet: no such file or directory
aaa_download_only_test.go:149: expected the file for binary exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubeadm" but got error stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubeadm: no such file or directory
aaa_download_only_test.go:149: expected the file for binary exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubectl" but got error stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.2/binaries (0.00s)

TestDownloadOnly/v1.23.3-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/cached-images
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.3-rc.0" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.3-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.3-rc.0" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.3-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.3-rc.0" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.3-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.3-rc.0" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.3-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.6" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6" but got error: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.3-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.3-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/binaries
aaa_download_only_test.go:149: expected the file for binary exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.3-rc.0/kubelet" but got error stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.3-rc.0/kubelet: no such file or directory
aaa_download_only_test.go:149: expected the file for binary exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.3-rc.0/kubeadm" but got error stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.3-rc.0/kubeadm: no such file or directory
aaa_download_only_test.go:149: expected the file for binary exist at "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.3-rc.0/kubectl" but got error stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.3-rc.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.3-rc.0/binaries (0.00s)

TestDownloadOnlyKic (7.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220126184239-2083 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:230: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220126184239-2083 --force --alsologtostderr --driver=docker : (6.268276776s)
aaa_download_only_test.go:238: failed to read tarball file "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2--overlay2-amd64.tar.lz4": open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2--overlay2-amd64.tar.lz4: no such file or directory
aaa_download_only_test.go:248: failed to read checksum file "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2--overlay2-amd64.tar.lz4.checksum" : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2--overlay2-amd64.tar.lz4.checksum: no such file or directory
aaa_download_only_test.go:251: failed to verify checksum. checksum of "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2--overlay2-amd64.tar.lz4" does not match remote checksum ("" != "\xd4\x1d\x8cُ\x00\xb2\x04\xe9\x80\t\x98\xec\xf8B~")
helpers_test.go:176: Cleaning up "download-docker-20220126184239-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220126184239-2083
--- FAIL: TestDownloadOnlyKic (7.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo  rmi k8s.gcr.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo  rmi k8s.gcr.io/pause:latest: exit status 1 (592.00192ms)

-- stdout --
	sudo: rmi: command not found

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo  rmi k8s.gcr.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1151: expected an error  but got no error. image should not exist. ! cmd: "out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo crictl inspecti k8s.gcr.io/pause:latest"
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)
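Note the doubled space in the failing command, `ssh sudo  rmi`: the word that should sit between `sudo` and `rmi` (the runtime binary, e.g. `docker`) is empty, so the shell runs `rmi` as a command and gets `command not found`. A hypothetical sketch of how an empty runtime name produces exactly that string (the format string is illustrative, not minikube's actual code):

```go
package main

import "fmt"

// deleteImageCmd builds the remote image-removal command from a
// container-runtime binary name. An empty runtimeBin leaves a double
// space and a bare "rmi", matching the failure above.
func deleteImageCmd(runtimeBin, image string) string {
	return fmt.Sprintf("sudo %s rmi %s", runtimeBin, image)
}

func main() {
	fmt.Println(deleteImageCmd("docker", "k8s.gcr.io/pause:latest")) // sudo docker rmi k8s.gcr.io/pause:latest
	fmt.Println(deleteImageCmd("", "k8s.gcr.io/pause:latest"))       // sudo  rmi k8s.gcr.io/pause:latest
}
```

The same empty-runtime pattern recurs elsewhere in this report, e.g. the `The "" container runtime requires CNI` expectation in TestNetworkPlugins below.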

TestFunctional/serial/LogsFileCmd (2.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1249: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20220126184901-2083807714701/logs.txt
functional_test.go:1249: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20220126184901-2083807714701/logs.txt: (2.513979646s)
functional_test.go:1254: expected empty minikube logs output, but got: 
***
-- stdout --
	

-- /stdout --
** stderr ** 
	E0126 18:52:45.282524    4212 logs.go:192] command /bin/bash -c "docker logs --tail 60 6578b9d666e6" failed with error: /bin/bash -c "docker logs --tail 60 6578b9d666e6": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 6578b9d666e6
	 output: "\n** stderr ** \nError: No such container: 6578b9d666e6\n\n** /stderr **"
	! unable to fetch logs for: coredns [6578b9d666e6]

** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (2.51s)

TestNetworkPlugins/group/false (60.76s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220126194239-2083 --memory=2048 --alsologtostderr --cni=false --driver=docker 
E0126 19:42:54.352816    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false
net_test.go:214: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220126194239-2083 --memory=2048 --alsologtostderr --cni=false --driver=docker : (40.051630205s)
net_test.go:216: out/minikube-darwin-amd64 start -p false-20220126194239-2083 --memory=2048 --alsologtostderr --cni=false --driver=docker  expected to fail
net_test.go:219: Expected 14 exit code, got 0
net_test.go:223: Expected "The \"\" container runtime requires CNI" line not found in output 
-- stdout --
	* [false-20220126194239-2083] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node false-20220126194239-2083 in cluster false-20220126194239-2083
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	  - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	* Done! kubectl is now configured to use "false-20220126194239-2083" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0126 19:42:39.262397   16187 out.go:297] Setting OutFile to fd 1 ...
	I0126 19:42:39.262523   16187 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:42:39.262528   16187 out.go:310] Setting ErrFile to fd 2...
	I0126 19:42:39.262532   16187 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:42:39.262606   16187 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 19:42:39.262923   16187 out.go:304] Setting JSON to false
	I0126 19:42:39.288616   16187 start.go:112] hostinfo: {"hostname":"37309.local","uptime":4334,"bootTime":1643250625,"procs":334,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 19:42:39.288712   16187 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 19:42:39.314815   16187 out.go:176] * [false-20220126194239-2083] minikube v1.25.1 on Darwin 11.2.3
	I0126 19:42:39.314909   16187 notify.go:174] Checking for updates...
	I0126 19:42:39.361910   16187 out.go:176]   - MINIKUBE_LOCATION=13251
	I0126 19:42:39.387785   16187 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 19:42:39.413891   16187 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0126 19:42:39.439887   16187 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0126 19:42:39.465765   16187 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0126 19:42:39.466262   16187 config.go:176] Loaded profile config "force-systemd-env-20220126194210-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:42:39.466321   16187 driver.go:344] Setting default libvirt URI to qemu:///system
	I0126 19:42:39.569121   16187 docker.go:132] docker version: linux-20.10.6
	I0126 19:42:39.569242   16187 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 19:42:39.762459   16187 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-27 03:42:39.689213479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 19:42:39.789259   16187 out.go:176] * Using the docker driver based on user configuration
	I0126 19:42:39.789284   16187 start.go:281] selected driver: docker
	I0126 19:42:39.789292   16187 start.go:798] validating driver "docker" against <nil>
	I0126 19:42:39.789313   16187 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0126 19:42:39.791657   16187 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 19:42:39.980931   16187 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-27 03:42:39.909650302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 19:42:39.981128   16187 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0126 19:42:39.981298   16187 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0126 19:42:39.981315   16187 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0126 19:42:39.981335   16187 cni.go:93] Creating CNI manager for "false"
	I0126 19:42:39.981345   16187 start_flags.go:302] config:
	{Name:false-20220126194239-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 19:42:40.028570   16187 out.go:176] * Starting control plane node false-20220126194239-2083 in cluster false-20220126194239-2083
	I0126 19:42:40.028613   16187 cache.go:120] Beginning downloading kic base image for docker with docker
	I0126 19:42:40.075353   16187 out.go:176] * Pulling base image ...
	I0126 19:42:40.075397   16187 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:42:40.075447   16187 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0126 19:42:40.075456   16187 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0126 19:42:40.075470   16187 cache.go:57] Caching tarball of preloaded images
	I0126 19:42:40.075577   16187 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0126 19:42:40.075594   16187 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0126 19:42:40.076089   16187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/config.json ...
	I0126 19:42:40.076181   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/config.json: {Name:mk31963e8dbfb6a0d0b2d9f061bef6876da7befc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:40.191240   16187 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0126 19:42:40.191262   16187 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0126 19:42:40.191271   16187 cache.go:208] Successfully downloaded all kic artifacts
	I0126 19:42:40.191336   16187 start.go:313] acquiring machines lock for false-20220126194239-2083: {Name:mk198c49e42e95e9e77c9ad201f40492a321a0bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 19:42:40.191479   16187 start.go:317] acquired machines lock for "false-20220126194239-2083" in 131.519µs
	I0126 19:42:40.191508   16187 start.go:89] Provisioning new machine with config: &{Name:false-20220126194239-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 19:42:40.191570   16187 start.go:126] createHost starting for "" (driver="docker")
	I0126 19:42:40.218399   16187 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0126 19:42:40.218592   16187 start.go:160] libmachine.API.Create for "false-20220126194239-2083" (driver="docker")
	I0126 19:42:40.218621   16187 client.go:168] LocalClient.Create starting
	I0126 19:42:40.218725   16187 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem
	I0126 19:42:40.244074   16187 main.go:130] libmachine: Decoding PEM data...
	I0126 19:42:40.244117   16187 main.go:130] libmachine: Parsing certificate...
	I0126 19:42:40.244252   16187 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem
	I0126 19:42:40.244326   16187 main.go:130] libmachine: Decoding PEM data...
	I0126 19:42:40.244346   16187 main.go:130] libmachine: Parsing certificate...
	I0126 19:42:40.245123   16187 cli_runner.go:133] Run: docker network inspect false-20220126194239-2083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0126 19:42:40.357964   16187 cli_runner.go:180] docker network inspect false-20220126194239-2083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0126 19:42:40.358071   16187 network_create.go:254] running [docker network inspect false-20220126194239-2083] to gather additional debugging logs...
	I0126 19:42:40.358089   16187 cli_runner.go:133] Run: docker network inspect false-20220126194239-2083
	W0126 19:42:40.475944   16187 cli_runner.go:180] docker network inspect false-20220126194239-2083 returned with exit code 1
	I0126 19:42:40.475968   16187 network_create.go:257] error running [docker network inspect false-20220126194239-2083]: docker network inspect false-20220126194239-2083: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220126194239-2083
	I0126 19:42:40.475981   16187 network_create.go:259] output of [docker network inspect false-20220126194239-2083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220126194239-2083
	
	** /stderr **
	I0126 19:42:40.476079   16187 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0126 19:42:40.590637   16187 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00043a160] misses:0}
	I0126 19:42:40.590675   16187 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0126 19:42:40.590692   16187 network_create.go:106] attempt to create docker network false-20220126194239-2083 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0126 19:42:40.590776   16187 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220126194239-2083
	I0126 19:42:41.467722   16187 network_create.go:90] docker network false-20220126194239-2083 192.168.49.0/24 created
	I0126 19:42:41.467757   16187 kic.go:106] calculated static IP "192.168.49.2" for the "false-20220126194239-2083" container
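	The static IP calculated above follows directly from the subnet reserved a few lines earlier: within the 192.168.49.0/24 range, the gateway takes .1 and the first node takes .2 (matching the ClientMin field in the reserved-subnet log line). A minimal sketch of that derivation, assuming nothing beyond the .1-gateway/.2-node convention visible in this log:

```shell
#!/bin/sh
# Derive the gateway and first node IP from a /24 subnet string,
# mirroring the convention shown in the log (gateway .1, node .2).
subnet="192.168.49.0/24"

base="${subnet%.*}"        # strip the last octet and prefix length -> 192.168.49
gateway="${base}.1"
node_ip="${base}.2"
echo "gateway=${gateway} node_ip=${node_ip}"
# prints: gateway=192.168.49.1 node_ip=192.168.49.2
```

	This relies on POSIX suffix removal (`${var%pattern}`), which drops the shortest trailing match of `.*`, i.e. the final `.0/24`.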
	I0126 19:42:41.467883   16187 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0126 19:42:41.580361   16187 cli_runner.go:133] Run: docker volume create false-20220126194239-2083 --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --label created_by.minikube.sigs.k8s.io=true
	I0126 19:42:41.693353   16187 oci.go:102] Successfully created a docker volume false-20220126194239-2083
	I0126 19:42:41.693482   16187 cli_runner.go:133] Run: docker run --rm --name false-20220126194239-2083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --entrypoint /usr/bin/test -v false-20220126194239-2083:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0126 19:42:42.189337   16187 oci.go:106] Successfully prepared a docker volume false-20220126194239-2083
	I0126 19:42:42.189384   16187 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:42:42.189399   16187 kic.go:179] Starting extracting preloaded images to volume ...
	I0126 19:42:42.189522   16187 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220126194239-2083:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0126 19:42:47.620893   16187 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220126194239-2083:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (5.431287368s)
	I0126 19:42:47.620926   16187 kic.go:188] duration metric: took 5.431523 seconds to extract preloaded images to volume
	I0126 19:42:47.621065   16187 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0126 19:42:47.806258   16187 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220126194239-2083 --name false-20220126194239-2083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220126194239-2083 --network false-20220126194239-2083 --ip 192.168.49.2 --volume false-20220126194239-2083:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0126 19:42:49.874795   16187 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220126194239-2083 --name false-20220126194239-2083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220126194239-2083 --network false-20220126194239-2083 --ip 192.168.49.2 --volume false-20220126194239-2083:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (2.068454599s)
	I0126 19:42:49.874925   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Running}}
	I0126 19:42:49.992396   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:42:50.105832   16187 cli_runner.go:133] Run: docker exec false-20220126194239-2083 stat /var/lib/dpkg/alternatives/iptables
	I0126 19:42:50.276299   16187 oci.go:281] the created container "false-20220126194239-2083" has a running status.
	I0126 19:42:50.276344   16187 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa...
	I0126 19:42:50.476843   16187 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0126 19:42:50.650475   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:42:50.766966   16187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0126 19:42:50.766984   16187 kic_runner.go:114] Args: [docker exec --privileged false-20220126194239-2083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0126 19:42:50.943152   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:42:51.057634   16187 machine.go:88] provisioning docker machine ...
	I0126 19:42:51.057676   16187 ubuntu.go:169] provisioning hostname "false-20220126194239-2083"
	I0126 19:42:51.057813   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:51.177212   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:51.177395   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:51.177406   16187 main.go:130] libmachine: About to run SSH command:
	sudo hostname false-20220126194239-2083 && echo "false-20220126194239-2083" | sudo tee /etc/hostname
	I0126 19:42:51.323266   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: false-20220126194239-2083
	
	I0126 19:42:51.323399   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:51.437486   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:51.437648   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:51.437662   16187 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-20220126194239-2083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220126194239-2083/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-20220126194239-2083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0126 19:42:51.572316   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0126 19:42:51.572332   16187 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube}
	I0126 19:42:51.572359   16187 ubuntu.go:177] setting up certificates
	I0126 19:42:51.572364   16187 provision.go:83] configureAuth start
	I0126 19:42:51.572465   16187 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220126194239-2083
	I0126 19:42:51.687022   16187 provision.go:138] copyHostCerts
	I0126 19:42:51.687126   16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem, removing ...
	I0126 19:42:51.687135   16187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
	I0126 19:42:51.687237   16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem (1078 bytes)
	I0126 19:42:51.687430   16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem, removing ...
	I0126 19:42:51.687442   16187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
	I0126 19:42:51.687500   16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem (1123 bytes)
	I0126 19:42:51.687642   16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem, removing ...
	I0126 19:42:51.687648   16187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
	I0126 19:42:51.687703   16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem (1679 bytes)
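	The three found/rm/cp triples above show copyHostCerts refreshing the certs at the .minikube root: any stale copy is removed first, then the canonical file is copied from certs/. A throwaway sketch of that remove-then-copy pattern (the directory and file contents here are illustrative, not minikube's actual paths):

```shell
#!/bin/sh
# Remove-then-copy refresh, as in the copyHostCerts log lines.
# Uses a temp directory standing in for the .minikube root.
mk="$(mktemp -d)"
mkdir -p "$mk/certs"
printf 'CA DATA\n' > "$mk/certs/ca.pem"   # canonical source
printf 'stale\n'   > "$mk/ca.pem"         # pre-existing copy, as in "found ..."

for f in ca.pem; do
    [ -f "$mk/$f" ] && rm "$mk/$f"        # "found ..., removing ..."
    cp "$mk/certs/$f" "$mk/$f"            # "cp: .../certs/ca.pem --> .../ca.pem"
done
```

	Removing before copying guarantees the destination never keeps stale permissions or content from an earlier run.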
	I0126 19:42:51.687825   16187 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem org=jenkins.false-20220126194239-2083 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220126194239-2083]
	I0126 19:42:51.851710   16187 provision.go:172] copyRemoteCerts
	I0126 19:42:51.851774   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0126 19:42:51.851839   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:51.974292   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:42:52.069077   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0126 19:42:52.086822   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0126 19:42:52.103854   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0126 19:42:52.123802   16187 provision.go:86] duration metric: configureAuth took 551.423385ms
	I0126 19:42:52.123821   16187 ubuntu.go:193] setting minikube options for container-runtime
	I0126 19:42:52.123976   16187 config.go:176] Loaded profile config "false-20220126194239-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:42:52.124056   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:52.242802   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:52.242979   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:52.243004   16187 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0126 19:42:52.380770   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0126 19:42:52.380789   16187 ubuntu.go:71] root file system type: overlay
	I0126 19:42:52.380962   16187 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0126 19:42:52.381046   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:52.498721   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:52.498905   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:52.498960   16187 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0126 19:42:52.646699   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0126 19:42:52.646867   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:52.773936   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:52.774113   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:52.774125   16187 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0126 19:43:00.208704   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-27 03:42:52.661753287 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0126 19:43:00.208730   16187 machine.go:91] provisioned docker machine in 9.151062453s
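	The long diff output above comes from the idempotent unit-update command a few lines earlier: the desired unit is written to docker.service.new, and only if it differs from the installed docker.service is it moved into place and the daemon restarted. A self-contained sketch of that pattern, with the systemctl calls stubbed out so it runs anywhere:

```shell
#!/bin/sh
# diff-then-replace update, as in the log's
#   sudo diff -u old new || { sudo mv new old; ... restart docker; }
# File names mirror the log; the reload/restart step is stubbed.
d="$(mktemp -d)"
printf 'old unit\n' > "$d/docker.service"
printf 'new unit\n' > "$d/docker.service.new"

restarted=no
diff -u "$d/docker.service" "$d/docker.service.new" >/dev/null || {
    mv "$d/docker.service.new" "$d/docker.service"
    restarted=yes   # stands in for daemon-reload + enable + restart
}
```

	Because `diff` exits non-zero only when the files differ, the replace-and-restart branch is skipped entirely on a no-op run, which is why repeated provisioning does not needlessly restart Docker.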
	I0126 19:43:00.208737   16187 client.go:171] LocalClient.Create took 19.990090769s
	I0126 19:43:00.208751   16187 start.go:168] duration metric: libmachine.API.Create for "false-20220126194239-2083" took 19.990140109s
	I0126 19:43:00.208761   16187 start.go:267] post-start starting for "false-20220126194239-2083" (driver="docker")
	I0126 19:43:00.208765   16187 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0126 19:43:00.208843   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0126 19:43:00.208970   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:00.325853   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:00.422977   16187 ssh_runner.go:195] Run: cat /etc/os-release
	I0126 19:43:00.427231   16187 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0126 19:43:00.427248   16187 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0126 19:43:00.427254   16187 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0126 19:43:00.427260   16187 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0126 19:43:00.427272   16187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/addons for local assets ...
	I0126 19:43:00.427366   16187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files for local assets ...
	I0126 19:43:00.427513   16187 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem -> 20832.pem in /etc/ssl/certs
	I0126 19:43:00.427682   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0126 19:43:00.435767   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /etc/ssl/certs/20832.pem (1708 bytes)
	I0126 19:43:00.460882   16187 start.go:270] post-start completed in 252.113397ms
	I0126 19:43:00.461646   16187 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220126194239-2083
	I0126 19:43:00.576378   16187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/config.json ...
	I0126 19:43:00.576821   16187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0126 19:43:00.576895   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:00.695056   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:00.786644   16187 start.go:129] duration metric: createHost completed in 20.595035356s
	I0126 19:43:00.786666   16187 start.go:80] releasing machines lock for "false-20220126194239-2083", held for 20.595157378s
	I0126 19:43:00.786772   16187 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220126194239-2083
	I0126 19:43:00.907364   16187 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0126 19:43:00.907372   16187 ssh_runner.go:195] Run: systemctl --version
	I0126 19:43:00.907470   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:00.907469   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:01.061493   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:01.061518   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:01.154816   16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0126 19:43:01.346809   16187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 19:43:01.356953   16187 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0126 19:43:01.357035   16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0126 19:43:01.366744   16187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0126 19:43:01.381810   16187 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0126 19:43:01.441503   16187 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0126 19:43:01.506444   16187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 19:43:01.518567   16187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0126 19:43:01.585286   16187 ssh_runner.go:195] Run: sudo systemctl start docker
	I0126 19:43:01.596054   16187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 19:43:01.650471   16187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 19:43:01.716065   16187 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0126 19:43:01.716175   16187 cli_runner.go:133] Run: docker exec -t false-20220126194239-2083 dig +short host.docker.internal
	I0126 19:43:01.884979   16187 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0126 19:43:01.885066   16187 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0126 19:43:01.889882   16187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0126 19:43:01.899220   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:02.038125   16187 out.go:176]   - kubelet.housekeeping-interval=5m
	I0126 19:43:02.038197   16187 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:43:02.038278   16187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 19:43:02.068994   16187 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 19:43:02.069010   16187 docker.go:537] Images already preloaded, skipping extraction
	I0126 19:43:02.069112   16187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 19:43:02.098561   16187 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 19:43:02.098577   16187 cache_images.go:84] Images are preloaded, skipping loading
	I0126 19:43:02.098698   16187 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0126 19:43:02.182111   16187 cni.go:93] Creating CNI manager for "false"
	I0126 19:43:02.182134   16187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0126 19:43:02.182149   16187 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220126194239-2083 NodeName:false-20220126194239-2083 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0126 19:43:02.182251   16187 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "false-20220126194239-2083"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0126 19:43:02.182317   16187 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=false-20220126194239-2083 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0126 19:43:02.182382   16187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0126 19:43:02.190657   16187 binaries.go:44] Found k8s binaries, skipping transfer
	I0126 19:43:02.190720   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0126 19:43:02.198112   16187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0126 19:43:02.211392   16187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0126 19:43:02.224810   16187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0126 19:43:02.237793   16187 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0126 19:43:02.241904   16187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0126 19:43:02.252081   16187 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083 for IP: 192.168.49.2
	I0126 19:43:02.252217   16187 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0126 19:43:02.252271   16187 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0126 19:43:02.252325   16187 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.key
	I0126 19:43:02.252345   16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.crt with IP's: []
	I0126 19:43:02.349705   16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.crt ...
	I0126 19:43:02.349721   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.crt: {Name:mkaf5b8adef5fead697514791bcb21a11dc46f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.350031   16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.key ...
	I0126 19:43:02.350041   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.key: {Name:mkb14dfaa85ac1cb104e5665db474d639cd0b2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.350241   16187 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2
	I0126 19:43:02.350261   16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0126 19:43:02.454254   16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2 ...
	I0126 19:43:02.454275   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2: {Name:mka3b43595548da8b0105a569e9eb6ac90195069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.454559   16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2 ...
	I0126 19:43:02.454569   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2: {Name:mk73ebcee90bed93a03a9ffc206cc5005dbc71e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.454748   16187 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt
	I0126 19:43:02.454933   16187 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key
	I0126 19:43:02.455194   16187 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key
	I0126 19:43:02.455222   16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt with IP's: []
	I0126 19:43:02.521212   16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt ...
	I0126 19:43:02.521230   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt: {Name:mk65a44fb27fa55ccd5799764b8abbdde192f657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.521524   16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key ...
	I0126 19:43:02.521533   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key: {Name:mk7c73892718b6261c7ef58c42a9cfc9860fadd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.521967   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem (1338 bytes)
	W0126 19:43:02.522022   16187 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083_empty.pem, impossibly tiny 0 bytes
	I0126 19:43:02.522032   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1679 bytes)
	I0126 19:43:02.522070   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0126 19:43:02.522109   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0126 19:43:02.522143   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1679 bytes)
	I0126 19:43:02.522218   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem (1708 bytes)
	I0126 19:43:02.523176   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0126 19:43:02.540724   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0126 19:43:02.560949   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0126 19:43:02.577831   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0126 19:43:02.594684   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0126 19:43:02.612559   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0126 19:43:02.630371   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0126 19:43:02.647386   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0126 19:43:02.664260   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /usr/share/ca-certificates/20832.pem (1708 bytes)
	I0126 19:43:02.681944   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0126 19:43:02.699036   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem --> /usr/share/ca-certificates/2083.pem (1338 bytes)
	I0126 19:43:02.716299   16187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0126 19:43:02.729500   16187 ssh_runner.go:195] Run: openssl version
	I0126 19:43:02.734998   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20832.pem && ln -fs /usr/share/ca-certificates/20832.pem /etc/ssl/certs/20832.pem"
	I0126 19:43:02.742793   16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20832.pem
	I0126 19:43:02.746985   16187 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:49 /usr/share/ca-certificates/20832.pem
	I0126 19:43:02.747034   16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20832.pem
	I0126 19:43:02.753036   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20832.pem /etc/ssl/certs/3ec20f2e.0"
	I0126 19:43:02.760655   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0126 19:43:02.768289   16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:43:02.772749   16187 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:43 /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:43:02.772801   16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:43:02.778530   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0126 19:43:02.786671   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2083.pem && ln -fs /usr/share/ca-certificates/2083.pem /etc/ssl/certs/2083.pem"
	I0126 19:43:02.794929   16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2083.pem
	I0126 19:43:02.799103   16187 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:49 /usr/share/ca-certificates/2083.pem
	I0126 19:43:02.799157   16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2083.pem
	I0126 19:43:02.804688   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2083.pem /etc/ssl/certs/51391683.0"
	I0126 19:43:02.813601   16187 kubeadm.go:388] StartCluster: {Name:false-20220126194239-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 19:43:02.813750   16187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0126 19:43:02.840880   16187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0126 19:43:02.849308   16187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0126 19:43:02.856711   16187 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0126 19:43:02.856763   16187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0126 19:43:02.863969   16187 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0126 19:43:02.863991   16187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0126 19:43:03.353659   16187 out.go:203]   - Generating certificates and keys ...
	I0126 19:43:05.421713   16187 out.go:203]   - Booting up control plane ...
	I0126 19:43:16.958385   16187 out.go:203]   - Configuring RBAC rules ...
	I0126 19:43:17.344444   16187 cni.go:93] Creating CNI manager for "false"
	I0126 19:43:17.344484   16187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0126 19:43:17.344585   16187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=df496161bea02a920f5582b36f44351d955cdf25 minikube.k8s.io/name=false-20220126194239-2083 minikube.k8s.io/updated_at=2022_01_26T19_43_17_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 19:43:17.344639   16187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 19:43:17.605712   16187 kubeadm.go:867] duration metric: took 261.212373ms to wait for elevateKubeSystemPrivileges.
	I0126 19:43:17.605765   16187 ops.go:34] apiserver oom_adj: -16
	I0126 19:43:17.605778   16187 kubeadm.go:390] StartCluster complete in 14.792168658s
	I0126 19:43:17.605797   16187 settings.go:142] acquiring lock: {Name:mkb44f1d9eb2a533b4b0cb7d08d08147a57d8376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:17.605909   16187 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 19:43:17.606849   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk2720725a2c48b74a1f04b19ffbd0e9d0a29d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:18.133440   16187 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20220126194239-2083" rescaled to 1
	I0126 19:43:18.133485   16187 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 19:43:18.133496   16187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0126 19:43:18.133510   16187 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0126 19:43:18.159770   16187 out.go:176] * Verifying Kubernetes components...
	I0126 19:43:18.133661   16187 config.go:176] Loaded profile config "false-20220126194239-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:43:18.159826   16187 addons.go:65] Setting default-storageclass=true in profile "false-20220126194239-2083"
	I0126 19:43:18.159833   16187 addons.go:65] Setting storage-provisioner=true in profile "false-20220126194239-2083"
	I0126 19:43:18.159850   16187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20220126194239-2083"
	I0126 19:43:18.159856   16187 addons.go:153] Setting addon storage-provisioner=true in "false-20220126194239-2083"
	W0126 19:43:18.159862   16187 addons.go:165] addon storage-provisioner should already be in state true
	I0126 19:43:18.159871   16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0126 19:43:18.159886   16187 host.go:66] Checking if "false-20220126194239-2083" exists ...
	I0126 19:43:18.160546   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:43:18.180077   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:43:18.193711   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:18.193746   16187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0126 19:43:18.315892   16187 addons.go:153] Setting addon default-storageclass=true in "false-20220126194239-2083"
	W0126 19:43:18.315912   16187 addons.go:165] addon default-storageclass should already be in state true
	I0126 19:43:18.315932   16187 host.go:66] Checking if "false-20220126194239-2083" exists ...
	I0126 19:43:18.316470   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:43:18.358709   16187 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0126 19:43:18.358863   16187 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 19:43:18.358875   16187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0126 19:43:18.358979   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:18.370404   16187 api_server.go:51] waiting for apiserver process to appear ...
	I0126 19:43:18.370461   16187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0126 19:43:18.469616   16187 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0126 19:43:18.469628   16187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0126 19:43:18.469718   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:18.501219   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:18.598643   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:18.607810   16187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 19:43:18.708608   16187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0126 19:43:18.976809   16187 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0126 19:43:18.976869   16187 api_server.go:71] duration metric: took 843.363854ms to wait for apiserver process to appear ...
	I0126 19:43:18.976885   16187 api_server.go:87] waiting for apiserver healthz status ...
	I0126 19:43:18.976898   16187 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58661/healthz ...
	I0126 19:43:18.984821   16187 api_server.go:266] https://127.0.0.1:58661/healthz returned 200:
	ok
	I0126 19:43:18.986256   16187 api_server.go:140] control plane version: v1.23.2
	I0126 19:43:18.986279   16187 api_server.go:130] duration metric: took 9.387022ms to wait for apiserver health ...
	I0126 19:43:18.986290   16187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0126 19:43:18.993753   16187 system_pods.go:59] 4 kube-system pods found
	I0126 19:43:18.993769   16187 system_pods.go:61] "etcd-false-20220126194239-2083" [5d961796-78c4-4a12-a763-7c511ecbdcfd] Pending
	I0126 19:43:18.993773   16187 system_pods.go:61] "kube-apiserver-false-20220126194239-2083" [46c4d59c-4b36-436f-a0a9-a8fccf0d9b5e] Pending
	I0126 19:43:18.993776   16187 system_pods.go:61] "kube-controller-manager-false-20220126194239-2083" [01c57f67-4273-4897-a260-b8debc638cb4] Pending
	I0126 19:43:18.993784   16187 system_pods.go:61] "kube-scheduler-false-20220126194239-2083" [a8ddb167-27b0-4d82-abce-2ab8ebed59a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0126 19:43:18.993790   16187 system_pods.go:74] duration metric: took 7.495112ms to wait for pod list to return data ...
	I0126 19:43:18.993796   16187 kubeadm.go:542] duration metric: took 860.29603ms to wait for : map[apiserver:true system_pods:true] ...
	I0126 19:43:18.993807   16187 node_conditions.go:102] verifying NodePressure condition ...
	I0126 19:43:18.997542   16187 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0126 19:43:18.997560   16187 node_conditions.go:123] node cpu capacity is 6
	I0126 19:43:18.997573   16187 node_conditions.go:105] duration metric: took 3.760208ms to run NodePressure ...
	I0126 19:43:18.997581   16187 start.go:213] waiting for startup goroutines ...
	I0126 19:43:19.044196   16187 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0126 19:43:19.044213   16187 addons.go:417] enableAddons completed in 910.711919ms
	I0126 19:43:19.103103   16187 start.go:496] kubectl: 1.19.7, cluster: 1.23.2 (minor skew: 4)
	I0126 19:43:19.129240   16187 out.go:176] 
	W0126 19:43:19.129391   16187 out.go:241] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.2.
	! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.2.
	I0126 19:43:19.176018   16187 out.go:176]   - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	I0126 19:43:19.202330   16187 out.go:176] * Done! kubectl is now configured to use "false-20220126194239-2083" cluster and "default" namespace by default

** /stderr **
net_test.go:83: *** TestNetworkPlugins/group/false FAILED at 2022-01-26 19:43:19.291671 -0800 PST m=+3693.027397115
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestNetworkPlugins/group/false]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect false-20220126194239-2083
helpers_test.go:236: (dbg) docker inspect false-20220126194239-2083:

-- stdout --
	[
	    {
	        "Id": "d3c450d29a2e0eaee335d94b456bfff5f6ef9079af2089ab96f5edd88b0e397f",
	        "Created": "2022-01-27T03:42:47.934606981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-01-27T03:42:49.883210349Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/d3c450d29a2e0eaee335d94b456bfff5f6ef9079af2089ab96f5edd88b0e397f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3c450d29a2e0eaee335d94b456bfff5f6ef9079af2089ab96f5edd88b0e397f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3c450d29a2e0eaee335d94b456bfff5f6ef9079af2089ab96f5edd88b0e397f/hosts",
	        "LogPath": "/var/lib/docker/containers/d3c450d29a2e0eaee335d94b456bfff5f6ef9079af2089ab96f5edd88b0e397f/d3c450d29a2e0eaee335d94b456bfff5f6ef9079af2089ab96f5edd88b0e397f-json.log",
	        "Name": "/false-20220126194239-2083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "false-20220126194239-2083:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "false-20220126194239-2083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/86ee9d48fd76992ab2b8ac533954c8b88332dd77319b14650c3635309dbc0531-init/diff:/var/lib/docker/overlay2/2a7e6cc2001e11fee4ebe50e26987d2294311d4c6e5cba4860e9cc3aa8c775f1/diff:/var/lib/docker/overlay2/75225079dcac1f7a5606093a81a0a8c373eb4da3d65cd90ddbcfb69d2624fe87/diff:/var/lib/docker/overlay2/e102d91ef30a8f3119bda2eca1ea56fa89f80d6bd06428c2d337ffd442f31e39/diff:/var/lib/docker/overlay2/2e906d2d6d22daf943a0aea5eceeb3554194635958e3c99ebafc987a6a3773c6/diff:/var/lib/docker/overlay2/ea570dd14e59999ac24760ec8128afc732d7e03000b0c846ff57f36063ea4857/diff:/var/lib/docker/overlay2/52f4d1be8ed49d3c3e4aa65645805bdeaefff9436d3a0be005ee0c01f22d6524/diff:/var/lib/docker/overlay2/4fab0356adacc3534f74fba3a295734d4364ed062cbb008da2cf4b6b7d0a93fa/diff:/var/lib/docker/overlay2/0df261bf0a8b8293f161caa2233324aa12c0c15b0095ec5b9ec30c4d8c0f1289/diff:/var/lib/docker/overlay2/9701cf193b3398acd0181490ea777089d7e3fbf7a4a0a2d0133554ca86995760/diff:/var/lib/docker/overlay2/b883650e947c28d0964c4da2c40a091dc8123e93bc57eed9f0a851e47c941aac/diff:/var/lib/docker/overlay2/7032585a99df9629540836d964bf1e9b2eebec0f02316aac93b747b173e0cad8/diff:/var/lib/docker/overlay2/62b91bb57a81a34f97d5f6ffd83241a912943cde283c183d9a07f55a92672949/diff:/var/lib/docker/overlay2/369d3bd409332d53570e4ec75c6c2ba47891be255c8ece7b9202131cd36b4404/diff:/var/lib/docker/overlay2/ed18852bc2469c676a9ed0481adf136a8d353167b3a7f52bfee4d79935c26139/diff:/var/lib/docker/overlay2/5bb2ee64dcdfe2728f75773490009b95fb9b909d064636feaf8075bbd13c85c9/diff:/var/lib/docker/overlay2/ef6ff5c7032fb5767e31428900ce994de894cd60272e9012de50ff2d7d38be0f/diff:/var/lib/docker/overlay2/33e161d7d38d725bad8809038472bc0ccdfe09cd124895bcad2a8f5f615b4de4/diff:/var/lib/docker/overlay2/95c5592d76807e381c893b4e3faf91eb98f0b89f3d8e812e1602b3fbd6282eba/diff:/var/lib/docker/overlay2/bbfc969d501deeffb78f7b6e93d2c0d17ddad78d9d1d27eaa4ada4e2dedfc37e/diff:/var/lib/docker/overlay2/31e96d0246e99ddd4d5b90503679b75ecf7b098c124c028b187600eb4d938dd8/diff:/var/lib/docker/overlay2/505d8b9cc5c8969dbce6fdf7cacddef94aa6609dffec10b704cdb6e69d6ce0e5/diff:/var/lib/docker/overlay2/411cfa777875b03e8c4ef0055bbb11dcffc8fea260819c75820efae78008687e/diff:/var/lib/docker/overlay2/216d5777c7f285f0744036a8e586e1ee61af673b4321fb8b088a0e8ebbfe819e/diff:/var/lib/docker/overlay2/a71aeff8d8919ccf39732643ee63d3083de635457c6382fbc8a3e84276c103ad/diff:/var/lib/docker/overlay2/ead8709d3fc0c08d0eac96bbdfe00216ec12c8403a39ea52b3e69288755d8d73/diff:/var/lib/docker/overlay2/3711201ea0f5fa1be41d4795c382348b51f31ee54b9d604593f80b3ee34d31fd/diff:/var/lib/docker/overlay2/75c12bc72fde0bb98e5a21f6648be245126e8507276c4725c9e55305fb3d9217/diff:/var/lib/docker/overlay2/92c133d0073dcdaf629d9697bdac9cf84fceb9554b98cdf17c0887c87ef2be89/diff:/var/lib/docker/overlay2/c067d51d62eb76562b4043fbd618dbf87f61fc61d77e3024f092098dfea90387/diff:/var/lib/docker/overlay2/c23441a7699cd1f6eda9af3296a891160f06bd8d2e9537464e4ae430e516bd99/diff:/var/lib/docker/overlay2/7c99ba0f262e34adde8bc1b90f245985daaa48e03f83b956e7984ae4cb1c5647/diff:/var/lib/docker/overlay2/ae6fd8924a1817492c2e12b25efff0f71e29bae42ec5f17f20f441b40f2db1f1/diff:/var/lib/docker/overlay2/3d465d35153dc134daba19a1d4b244a518037a2e024f84fbbc42e3c450cf8e94/diff:/var/lib/docker/overlay2/7258fe9f4b6805c2ee0ae748e188ffea153a1f1b9ce4fa950f9dbb124aed6580/diff:/var/lib/docker/overlay2/229e9099d6c560afa616010365435d1fe1cc6f000768b8e966fc3f924ba7c604/diff:/var/lib/docker/overlay2/ddc9dba6629b973d3038b7a422482fc243bae154322939ebaa77a75368dcfa08/diff:/var/lib/docker/overlay2/45e8baed0cb609a322bc42eb20a2d4afaa91f06f1affcbba332bee6f8714c6bc/diff:/var/lib/docker/overlay2/73346911f8c5f88f14bab74e68043fce4b3e7736b0a333b5ae34d44343013ae4/diff:/var/lib/docker/overlay2/12b12379bbfe5dda93638d4a87b9257deeb1643be4ffddf5551a8b41e1b41a7f/diff:/var/lib/docker/overlay2/ea6e8b819a378644e70f2185a7b51db37f8c0d24f8b6648ad388d74f08c2c510/diff:/var/lib/docker/overlay2/7b3462df9d94fb751b121776d729f78d0bc8acd4e3dd1bf143ddef20ed8733d1/diff:/var/lib/docker/overlay2/1ac60e0a0574910c05a20192d5988606665d3101ced6bdbc31b7660cd8431283/diff:/var/lib/docker/overlay2/c6cdf5fd609878026154660951e80c9c6bc61a49cd2d889fbdccea6c8c36d474/diff:/var/lib/docker/overlay2/46c08365e5d94e0fcaca61e53b0d880b1b42b9c1387136f352318dca068deef3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86ee9d48fd76992ab2b8ac533954c8b88332dd77319b14650c3635309dbc0531/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86ee9d48fd76992ab2b8ac533954c8b88332dd77319b14650c3635309dbc0531/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86ee9d48fd76992ab2b8ac533954c8b88332dd77319b14650c3635309dbc0531/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "false-20220126194239-2083",
	                "Source": "/var/lib/docker/volumes/false-20220126194239-2083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "false-20220126194239-2083",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "false-20220126194239-2083",
	                "name.minikube.sigs.k8s.io": "false-20220126194239-2083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "997873f28fc2d758fd3f9131619a506f964c27f941f8a5a3598919650d4579c6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58662"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58663"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58664"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58660"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58661"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/997873f28fc2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "false-20220126194239-2083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d3c450d29a2e",
	                        "false-20220126194239-2083"
	                    ],
	                    "NetworkID": "c8884b4570bdaf142e77cff1a519efba757242219c92bb9cdfcee8c56dc82af4",
	                    "EndpointID": "d9468113f262f79eb72841dd5680dec00e56fdf3178c50bbff5e2b0eb373387d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p false-20220126194239-2083 -n false-20220126194239-2083
helpers_test.go:245: <<< TestNetworkPlugins/group/false FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestNetworkPlugins/group/false]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p false-20220126194239-2083 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p false-20220126194239-2083 logs -n 25: (1.991828221s)
helpers_test.go:253: TestNetworkPlugins/group/false logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                     | NoKubernetes-20220126193232-2083       | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:35:10 PST | Wed, 26 Jan 2022 19:35:19 PST |
	|         | NoKubernetes-20220126193232-2083       |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220126193519-2083 | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:35:19 PST | Wed, 26 Jan 2022 19:36:48 PST |
	|         | kubernetes-upgrade-20220126193519-2083 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |                                        |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20220126193519-2083 | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:36:48 PST | Wed, 26 Jan 2022 19:37:04 PST |
	|         | kubernetes-upgrade-20220126193519-2083 |                                        |         |         |                               |                               |
	| start   | -p                                     | missing-upgrade-20220126193433-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:35:45 PST | Wed, 26 Jan 2022 19:37:18 PST |
	|         | missing-upgrade-20220126193433-2083    |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	| delete  | -p                                     | missing-upgrade-20220126193433-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:37:18 PST | Wed, 26 Jan 2022 19:37:23 PST |
	|         | missing-upgrade-20220126193433-2083    |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220126193519-2083 | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:37:05 PST | Wed, 26 Jan 2022 19:38:04 PST |
	|         | kubernetes-upgrade-20220126193519-2083 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3-rc.0      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |                                        |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220126193519-2083 | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:38:05 PST | Wed, 26 Jan 2022 19:38:45 PST |
	|         | kubernetes-upgrade-20220126193519-2083 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3-rc.0      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |                                        |                                        |         |         |                               |                               |
	| delete  | -p                                     | kubernetes-upgrade-20220126193519-2083 | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:38:45 PST | Wed, 26 Jan 2022 19:38:59 PST |
	|         | kubernetes-upgrade-20220126193519-2083 |                                        |         |         |                               |                               |
	| start   | -p                                     | stopped-upgrade-20220126193723-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:39:12 PST | Wed, 26 Jan 2022 19:39:55 PST |
	|         | stopped-upgrade-20220126193723-2083    |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	| logs    | -p                                     | stopped-upgrade-20220126193723-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:39:55 PST | Wed, 26 Jan 2022 19:39:58 PST |
	|         | stopped-upgrade-20220126193723-2083    |                                        |         |         |                               |                               |
	| delete  | -p                                     | stopped-upgrade-20220126193723-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:39:58 PST | Wed, 26 Jan 2022 19:40:06 PST |
	|         | stopped-upgrade-20220126193723-2083    |                                        |         |         |                               |                               |
	| start   | -p                                     | running-upgrade-20220126193859-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:40:33 PST | Wed, 26 Jan 2022 19:42:03 PST |
	|         | running-upgrade-20220126193859-2083    |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	| start   | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:40:07 PST | Wed, 26 Jan 2022 19:42:04 PST |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=docker             |                                        |         |         |                               |                               |
	| delete  | -p                                     | running-upgrade-20220126193859-2083    | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:03 PST | Wed, 26 Jan 2022 19:42:10 PST |
	|         | running-upgrade-20220126193859-2083    |                                        |         |         |                               |                               |
	| start   | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:04 PST | Wed, 26 Jan 2022 19:42:11 PST |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	| pause   | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:11 PST | Wed, 26 Jan 2022 19:42:12 PST |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	| unpause | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:13 PST | Wed, 26 Jan 2022 19:42:14 PST |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	| pause   | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:14 PST | Wed, 26 Jan 2022 19:42:15 PST |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	| delete  | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:15 PST | Wed, 26 Jan 2022 19:42:32 PST |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	| profile | list --output json                     | minikube                               | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:32 PST | Wed, 26 Jan 2022 19:42:36 PST |
	| delete  | -p pause-20220126194007-2083           | pause-20220126194007-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:36 PST | Wed, 26 Jan 2022 19:42:37 PST |
	| delete  | -p kubenet-20220126194237-2083         | kubenet-20220126194237-2083            | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:37 PST | Wed, 26 Jan 2022 19:42:38 PST |
	| delete  | -p flannel-20220126194238-2083         | flannel-20220126194238-2083            | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:38 PST | Wed, 26 Jan 2022 19:42:39 PST |
	| start   | -p                                     | force-systemd-env-20220126194210-2083  | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:10 PST | Wed, 26 Jan 2022 19:43:15 PST |
	|         | force-systemd-env-20220126194210-2083  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr -v=5   |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	| start   | -p false-20220126194239-2083           | false-20220126194239-2083              | jenkins | v1.25.1 | Wed, 26 Jan 2022 19:42:39 PST | Wed, 26 Jan 2022 19:43:19 PST |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --alsologtostderr --cni=false          |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/26 19:42:39
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0126 19:42:39.262397   16187 out.go:297] Setting OutFile to fd 1 ...
	I0126 19:42:39.262523   16187 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:42:39.262528   16187 out.go:310] Setting ErrFile to fd 2...
	I0126 19:42:39.262532   16187 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:42:39.262606   16187 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 19:42:39.262923   16187 out.go:304] Setting JSON to false
	I0126 19:42:39.288616   16187 start.go:112] hostinfo: {"hostname":"37309.local","uptime":4334,"bootTime":1643250625,"procs":334,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 19:42:39.288712   16187 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 19:42:39.314815   16187 out.go:176] * [false-20220126194239-2083] minikube v1.25.1 on Darwin 11.2.3
	I0126 19:42:39.314909   16187 notify.go:174] Checking for updates...
	I0126 19:42:39.361910   16187 out.go:176]   - MINIKUBE_LOCATION=13251
	I0126 19:42:39.387785   16187 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 19:42:39.413891   16187 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0126 19:42:39.439887   16187 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0126 19:42:39.465765   16187 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0126 19:42:39.466262   16187 config.go:176] Loaded profile config "force-systemd-env-20220126194210-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:42:39.466321   16187 driver.go:344] Setting default libvirt URI to qemu:///system
	I0126 19:42:39.569121   16187 docker.go:132] docker version: linux-20.10.6
	I0126 19:42:39.569242   16187 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 19:42:39.762459   16187 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-27 03:42:39.689213479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 19:42:39.789259   16187 out.go:176] * Using the docker driver based on user configuration
	I0126 19:42:39.789284   16187 start.go:281] selected driver: docker
	I0126 19:42:39.789292   16187 start.go:798] validating driver "docker" against <nil>
	I0126 19:42:39.789313   16187 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0126 19:42:39.791657   16187 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 19:42:39.980931   16187 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-27 03:42:39.909650302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 19:42:39.981128   16187 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0126 19:42:39.981298   16187 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0126 19:42:39.981315   16187 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0126 19:42:39.981335   16187 cni.go:93] Creating CNI manager for "false"
	I0126 19:42:39.981345   16187 start_flags.go:302] config:
	{Name:false-20220126194239-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 19:42:40.028570   16187 out.go:176] * Starting control plane node false-20220126194239-2083 in cluster false-20220126194239-2083
	I0126 19:42:40.028613   16187 cache.go:120] Beginning downloading kic base image for docker with docker
	I0126 19:42:40.075353   16187 out.go:176] * Pulling base image ...
	I0126 19:42:40.075397   16187 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:42:40.075447   16187 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0126 19:42:40.075456   16187 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0126 19:42:40.075470   16187 cache.go:57] Caching tarball of preloaded images
	I0126 19:42:40.075577   16187 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0126 19:42:40.075594   16187 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0126 19:42:40.076089   16187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/config.json ...
	I0126 19:42:40.076181   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/config.json: {Name:mk31963e8dbfb6a0d0b2d9f061bef6876da7befc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:40.191240   16187 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0126 19:42:40.191262   16187 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0126 19:42:40.191271   16187 cache.go:208] Successfully downloaded all kic artifacts
	I0126 19:42:40.191336   16187 start.go:313] acquiring machines lock for false-20220126194239-2083: {Name:mk198c49e42e95e9e77c9ad201f40492a321a0bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 19:42:40.191479   16187 start.go:317] acquired machines lock for "false-20220126194239-2083" in 131.519µs
	I0126 19:42:40.191508   16187 start.go:89] Provisioning new machine with config: &{Name:false-20220126194239-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 19:42:40.191570   16187 start.go:126] createHost starting for "" (driver="docker")
	I0126 19:42:38.315850   15806 main.go:130] libmachine: SSH cmd err, output: <nil>: force-systemd-env-20220126194210-2083
	
	I0126 19:42:38.315947   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:38.440908   15806 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:38.441065   15806 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58194 <nil> <nil>}
	I0126 19:42:38.441081   15806 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-20220126194210-2083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-20220126194210-2083/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-20220126194210-2083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0126 19:42:38.578699   15806 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0126 19:42:38.578721   15806 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube}
	I0126 19:42:38.578738   15806 ubuntu.go:177] setting up certificates
	I0126 19:42:38.578746   15806 provision.go:83] configureAuth start
	I0126 19:42:38.578828   15806 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20220126194210-2083
	I0126 19:42:38.702074   15806 provision.go:138] copyHostCerts
	I0126 19:42:38.702119   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
	I0126 19:42:38.702180   15806 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem, removing ...
	I0126 19:42:38.702193   15806 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
	I0126 19:42:38.702307   15806 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem (1078 bytes)
	I0126 19:42:38.702533   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
	I0126 19:42:38.702577   15806 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem, removing ...
	I0126 19:42:38.702583   15806 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
	I0126 19:42:38.702646   15806 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem (1123 bytes)
	I0126 19:42:38.702779   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
	I0126 19:42:38.702812   15806 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem, removing ...
	I0126 19:42:38.702817   15806 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
	I0126 19:42:38.702875   15806 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem (1679 bytes)
	I0126 19:42:38.703005   15806 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-20220126194210-2083 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-20220126194210-2083]
	I0126 19:42:38.763164   15806 provision.go:172] copyRemoteCerts
	I0126 19:42:38.763216   15806 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0126 19:42:38.763280   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:38.882945   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:42:38.977673   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0126 19:42:38.977766   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0126 19:42:38.996816   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0126 19:42:38.996933   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0126 19:42:39.017737   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0126 19:42:39.017852   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0126 19:42:39.037891   15806 provision.go:86] duration metric: configureAuth took 459.134516ms
	I0126 19:42:39.037905   15806 ubuntu.go:193] setting minikube options for container-runtime
	I0126 19:42:39.038037   15806 config.go:176] Loaded profile config "force-systemd-env-20220126194210-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:42:39.038107   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:39.160156   15806 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:39.160318   15806 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58194 <nil> <nil>}
	I0126 19:42:39.160330   15806 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0126 19:42:39.295940   15806 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0126 19:42:39.295961   15806 ubuntu.go:71] root file system type: overlay
	I0126 19:42:39.296105   15806 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0126 19:42:39.296200   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:39.471238   15806 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:39.471532   15806 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58194 <nil> <nil>}
	I0126 19:42:39.471618   15806 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0126 19:42:39.617145   15806 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0126 19:42:39.617258   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:39.737288   15806 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:39.737459   15806 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58194 <nil> <nil>}
	I0126 19:42:39.737471   15806 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0126 19:42:40.218399   16187 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0126 19:42:40.218592   16187 start.go:160] libmachine.API.Create for "false-20220126194239-2083" (driver="docker")
	I0126 19:42:40.218621   16187 client.go:168] LocalClient.Create starting
	I0126 19:42:40.218725   16187 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem
	I0126 19:42:40.244074   16187 main.go:130] libmachine: Decoding PEM data...
	I0126 19:42:40.244117   16187 main.go:130] libmachine: Parsing certificate...
	I0126 19:42:40.244252   16187 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem
	I0126 19:42:40.244326   16187 main.go:130] libmachine: Decoding PEM data...
	I0126 19:42:40.244346   16187 main.go:130] libmachine: Parsing certificate...
	I0126 19:42:40.245123   16187 cli_runner.go:133] Run: docker network inspect false-20220126194239-2083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0126 19:42:40.357964   16187 cli_runner.go:180] docker network inspect false-20220126194239-2083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0126 19:42:40.358071   16187 network_create.go:254] running [docker network inspect false-20220126194239-2083] to gather additional debugging logs...
	I0126 19:42:40.358089   16187 cli_runner.go:133] Run: docker network inspect false-20220126194239-2083
	W0126 19:42:40.475944   16187 cli_runner.go:180] docker network inspect false-20220126194239-2083 returned with exit code 1
	I0126 19:42:40.475968   16187 network_create.go:257] error running [docker network inspect false-20220126194239-2083]: docker network inspect false-20220126194239-2083: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220126194239-2083
	I0126 19:42:40.475981   16187 network_create.go:259] output of [docker network inspect false-20220126194239-2083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220126194239-2083
	
	** /stderr **
	I0126 19:42:40.476079   16187 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0126 19:42:40.590637   16187 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00043a160] misses:0}
	I0126 19:42:40.590675   16187 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0126 19:42:40.590692   16187 network_create.go:106] attempt to create docker network false-20220126194239-2083 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0126 19:42:40.590776   16187 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220126194239-2083
	I0126 19:42:41.467722   16187 network_create.go:90] docker network false-20220126194239-2083 192.168.49.0/24 created
	I0126 19:42:41.467757   16187 kic.go:106] calculated static IP "192.168.49.2" for the "false-20220126194239-2083" container
	I0126 19:42:41.467883   16187 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0126 19:42:41.580361   16187 cli_runner.go:133] Run: docker volume create false-20220126194239-2083 --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --label created_by.minikube.sigs.k8s.io=true
	I0126 19:42:41.693353   16187 oci.go:102] Successfully created a docker volume false-20220126194239-2083
	I0126 19:42:41.693482   16187 cli_runner.go:133] Run: docker run --rm --name false-20220126194239-2083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --entrypoint /usr/bin/test -v false-20220126194239-2083:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0126 19:42:42.189337   16187 oci.go:106] Successfully prepared a docker volume false-20220126194239-2083
	I0126 19:42:42.189384   16187 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:42:42.189399   16187 kic.go:179] Starting extracting preloaded images to volume ...
	I0126 19:42:42.189522   16187 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220126194239-2083:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0126 19:42:47.620893   16187 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220126194239-2083:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (5.431287368s)
	I0126 19:42:47.620926   16187 kic.go:188] duration metric: took 5.431523 seconds to extract preloaded images to volume
	I0126 19:42:47.621065   16187 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0126 19:42:47.806258   16187 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220126194239-2083 --name false-20220126194239-2083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220126194239-2083 --network false-20220126194239-2083 --ip 192.168.49.2 --volume false-20220126194239-2083:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0126 19:42:49.874795   16187 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220126194239-2083 --name false-20220126194239-2083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220126194239-2083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220126194239-2083 --network false-20220126194239-2083 --ip 192.168.49.2 --volume false-20220126194239-2083:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (2.068454599s)
	I0126 19:42:49.874925   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Running}}
	I0126 19:42:49.992396   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:42:50.105832   16187 cli_runner.go:133] Run: docker exec false-20220126194239-2083 stat /var/lib/dpkg/alternatives/iptables
	I0126 19:42:50.276299   16187 oci.go:281] the created container "false-20220126194239-2083" has a running status.
	I0126 19:42:50.276344   16187 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa...
	I0126 19:42:50.476843   16187 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0126 19:42:50.650475   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:42:50.766966   16187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0126 19:42:50.766984   16187 kic_runner.go:114] Args: [docker exec --privileged false-20220126194239-2083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0126 19:42:50.943152   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:42:51.057634   16187 machine.go:88] provisioning docker machine ...
	I0126 19:42:51.057676   16187 ubuntu.go:169] provisioning hostname "false-20220126194239-2083"
	I0126 19:42:51.057813   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:51.177212   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:51.177395   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:51.177406   16187 main.go:130] libmachine: About to run SSH command:
	sudo hostname false-20220126194239-2083 && echo "false-20220126194239-2083" | sudo tee /etc/hostname
	I0126 19:42:51.323266   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: false-20220126194239-2083
	
	I0126 19:42:51.323399   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:51.437486   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:51.437648   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:51.437662   16187 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-20220126194239-2083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220126194239-2083/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-20220126194239-2083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0126 19:42:51.572316   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0126 19:42:51.572332   16187 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube}
	I0126 19:42:51.572359   16187 ubuntu.go:177] setting up certificates
	I0126 19:42:51.572364   16187 provision.go:83] configureAuth start
	I0126 19:42:51.572465   16187 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220126194239-2083
	I0126 19:42:51.687022   16187 provision.go:138] copyHostCerts
	I0126 19:42:51.687126   16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem, removing ...
	I0126 19:42:51.687135   16187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
	I0126 19:42:51.687237   16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem (1078 bytes)
	I0126 19:42:51.687430   16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem, removing ...
	I0126 19:42:51.687442   16187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
	I0126 19:42:51.687500   16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem (1123 bytes)
	I0126 19:42:51.687642   16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem, removing ...
	I0126 19:42:51.687648   16187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
	I0126 19:42:51.687703   16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem (1679 bytes)
	I0126 19:42:51.687825   16187 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem org=jenkins.false-20220126194239-2083 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220126194239-2083]
	I0126 19:42:51.851710   16187 provision.go:172] copyRemoteCerts
	I0126 19:42:51.851774   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0126 19:42:51.851839   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:51.974292   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:42:52.069077   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0126 19:42:52.086822   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0126 19:42:52.103854   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0126 19:42:52.123802   16187 provision.go:86] duration metric: configureAuth took 551.423385ms
	I0126 19:42:52.123821   16187 ubuntu.go:193] setting minikube options for container-runtime
	I0126 19:42:52.123976   16187 config.go:176] Loaded profile config "false-20220126194239-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:42:52.124056   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:52.242802   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:52.242979   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:52.243004   16187 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0126 19:42:52.380770   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0126 19:42:52.380789   16187 ubuntu.go:71] root file system type: overlay
	I0126 19:42:52.380962   16187 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0126 19:42:52.381046   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:52.498721   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:52.498905   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:52.498960   16187 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0126 19:42:52.646699   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0126 19:42:52.646867   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:42:52.773936   16187 main.go:130] libmachine: Using SSH client type: native
	I0126 19:42:52.774113   16187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 58662 <nil> <nil>}
	I0126 19:42:52.774125   16187 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0126 19:42:51.916397   15806 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-27 03:42:39.622754191 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
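The drop-in above first clears the inherited `ExecStart=` with a blank assignment before setting the new command, exactly as its own comment explains. A minimal sketch of that rule (not part of minikube; `check_execstart` is a hypothetical helper) counts how many non-empty `ExecStart=` commands survive once blank resets are honored:

```shell
#!/bin/sh
# Count effective ExecStart= commands in a unit fragment, honoring the
# systemd rule that a bare "ExecStart=" clears everything set so far.
# check_execstart is a hypothetical helper, not a minikube function.
check_execstart() {
    awk '
        /^ExecStart=$/ { n = 0; next }   # blank assignment resets the list
        /^ExecStart=./ { n++ }           # non-empty assignment appends
        END            { print n }
    ' "$1"
}

# A drop-in that resets and then sets exactly one command, like the one above.
cat > /tmp/dropin.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd://
EOF

check_execstart /tmp/dropin.conf   # prints 1
```

Without the blank reset, a non-oneshot service with two `ExecStart=` lines fails to start with the "more than one ExecStart= setting" error quoted in the drop-in's comment.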
	
	I0126 19:42:51.916448   15806 machine.go:91] provisioned docker machine in 16.863260725s
	I0126 19:42:51.916462   15806 client.go:171] LocalClient.Create took 40.326496909s
	I0126 19:42:51.916486   15806 start.go:168] duration metric: libmachine.API.Create for "force-systemd-env-20220126194210-2083" took 40.326570683s
	I0126 19:42:51.916502   15806 start.go:267] post-start starting for "force-systemd-env-20220126194210-2083" (driver="docker")
	I0126 19:42:51.916508   15806 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0126 19:42:51.916635   15806 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0126 19:42:51.916739   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:52.035752   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:42:52.130986   15806 ssh_runner.go:195] Run: cat /etc/os-release
	I0126 19:42:52.135538   15806 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0126 19:42:52.135558   15806 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0126 19:42:52.135565   15806 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0126 19:42:52.135571   15806 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0126 19:42:52.135583   15806 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/addons for local assets ...
	I0126 19:42:52.135694   15806 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files for local assets ...
	I0126 19:42:52.135892   15806 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem -> 20832.pem in /etc/ssl/certs
	I0126 19:42:52.135901   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem -> /etc/ssl/certs/20832.pem
	I0126 19:42:52.136067   15806 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0126 19:42:52.144373   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /etc/ssl/certs/20832.pem (1708 bytes)
	I0126 19:42:52.163761   15806 start.go:270] post-start completed in 247.249403ms
	I0126 19:42:52.164352   15806 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20220126194210-2083
	I0126 19:42:52.283906   15806 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/config.json ...
	I0126 19:42:52.284297   15806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0126 19:42:52.284362   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:52.400604   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:42:52.495390   15806 start.go:129] duration metric: createHost completed in 40.954329157s
	I0126 19:42:52.495411   15806 start.go:80] releasing machines lock for "force-systemd-env-20220126194210-2083", held for 40.954473444s
	I0126 19:42:52.495548   15806 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20220126194210-2083
	I0126 19:42:52.613850   15806 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0126 19:42:52.613860   15806 ssh_runner.go:195] Run: systemctl --version
	I0126 19:42:52.613931   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:52.613947   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:52.746975   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:42:52.747014   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:42:52.838810   15806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0126 19:42:53.028316   15806 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 19:42:53.040326   15806 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0126 19:42:53.040390   15806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0126 19:42:53.050125   15806 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0126 19:42:53.066774   15806 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0126 19:42:53.128423   15806 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0126 19:42:53.188450   15806 docker.go:506] Forcing docker to use systemd as cgroup manager...
	I0126 19:42:53.188475   15806 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0126 19:42:53.202729   15806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0126 19:42:53.260074   15806 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0126 19:42:56.776995   15806 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.516900363s)
	I0126 19:42:56.777123   15806 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 19:42:56.815751   15806 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 19:42:56.902397   15806 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0126 19:42:56.902526   15806 cli_runner.go:133] Run: docker exec -t force-systemd-env-20220126194210-2083 dig +short host.docker.internal
	I0126 19:42:57.074452   15806 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0126 19:42:57.074559   15806 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0126 19:42:57.079784   15806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
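The `{ grep -v ...; echo ...; } > tmp; sudo cp tmp /etc/hosts` one-liner above is an idempotent update: strip any existing entry for the name, append the fresh one, then copy the result back. A sketch of the same pattern against a scratch file (no sudo; `update_hosts` is a hypothetical helper name, and the IPs are illustrative):

```shell
#!/bin/sh
# Idempotent hosts-file update, as in the logged one-liner, but against
# a scratch file instead of /etc/hosts.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n' > "$HOSTS"

update_hosts() {  # usage: update_hosts <ip> <name>  (hypothetical helper)
    # Drop any line ending in the name, then append the new mapping.
    { grep -v "[[:space:]]$2\$" "$HOSTS"; printf '%s\t%s\n' "$1" "$2"; } > "$HOSTS.tmp"
    cp "$HOSTS.tmp" "$HOSTS"
}

# Running it twice still leaves exactly one entry, pointing at the new IP.
update_hosts 192.168.65.2 host.minikube.internal
update_hosts 192.168.65.2 host.minikube.internal
grep -c 'host.minikube.internal' "$HOSTS"   # prints 1
```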
	I0126 19:42:57.090459   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:42:57.231953   15806 out.go:176]   - kubelet.housekeeping-interval=5m
	I0126 19:42:57.232025   15806 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:42:57.232111   15806 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 19:42:57.264107   15806 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 19:42:57.264122   15806 docker.go:537] Images already preloaded, skipping extraction
	I0126 19:42:57.264207   15806 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 19:42:57.297275   15806 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 19:42:57.297292   15806 cache_images.go:84] Images are preloaded, skipping loading
	I0126 19:42:57.297400   15806 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0126 19:42:57.376153   15806 cni.go:93] Creating CNI manager for ""
	I0126 19:42:57.376167   15806 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0126 19:42:57.376178   15806 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0126 19:42:57.376201   15806 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-20220126194210-2083 NodeName:force-systemd-env-20220126194210-2083 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0126 19:42:57.376309   15806 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "force-systemd-env-20220126194210-2083"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
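The generated kubeadm config above is a single file bundling several `---`-separated YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to enumerate them, shown here against a trimmed stand-in for the full config:

```shell
#!/bin/sh
# List the document kinds in a multi-document kubeadm config.
# The heredoc is a trimmed stand-in for the full config above.
cat > /tmp/kubeadm.demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

awk '/^kind:/ { print $2 }' /tmp/kubeadm.demo.yaml
```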
	
	I0126 19:42:57.376374   15806 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=force-systemd-env-20220126194210-2083 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:force-systemd-env-20220126194210-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0126 19:42:57.376443   15806 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0126 19:42:57.384632   15806 binaries.go:44] Found k8s binaries, skipping transfer
	I0126 19:42:57.384690   15806 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0126 19:42:57.392163   15806 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (390 bytes)
	I0126 19:42:57.405216   15806 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0126 19:42:57.418071   15806 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2058 bytes)
	I0126 19:42:57.430088   15806 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0126 19:42:57.433884   15806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0126 19:42:57.443349   15806 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083 for IP: 192.168.58.2
	I0126 19:42:57.443477   15806 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0126 19:42:57.443530   15806 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0126 19:42:57.443576   15806 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.key
	I0126 19:42:57.443593   15806 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.crt with IP's: []
	I0126 19:42:57.541630   15806 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.crt ...
	I0126 19:42:57.541645   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.crt: {Name:mk1ff9385422a6c70e7869aac7649430279ba572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:57.541950   15806 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.key ...
	I0126 19:42:57.541959   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.key: {Name:mk0ded4796842522485628d3233b048b3844e961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:57.542154   15806 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key.cee25041
	I0126 19:42:57.542175   15806 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0126 19:42:57.590104   15806 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt.cee25041 ...
	I0126 19:42:57.590118   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt.cee25041: {Name:mk21f86ddafcdb1f621314d03fd600059269f97a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:57.590359   15806 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key.cee25041 ...
	I0126 19:42:57.590367   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key.cee25041: {Name:mk437f9c231982e2d53425fb6b9cd5cf6bbcfcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:57.590536   15806 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt
	I0126 19:42:57.590690   15806 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key
	I0126 19:42:57.590848   15806 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.key
	I0126 19:42:57.590865   15806 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.crt with IP's: []
	I0126 19:42:57.770760   15806 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.crt ...
	I0126 19:42:57.770778   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.crt: {Name:mkd1d823c28554147a3f9c4ca5e6ddf759751427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:57.771055   15806 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.key ...
	I0126 19:42:57.771064   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.key: {Name:mkb1a613d5c2247d050039100a4e59ba00fc52b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:42:57.771235   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0126 19:42:57.771267   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0126 19:42:57.771288   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0126 19:42:57.771309   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0126 19:42:57.771330   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0126 19:42:57.771350   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0126 19:42:57.771371   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0126 19:42:57.771392   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0126 19:42:57.771475   15806 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem (1338 bytes)
	W0126 19:42:57.771515   15806 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083_empty.pem, impossibly tiny 0 bytes
	I0126 19:42:57.771529   15806 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1679 bytes)
	I0126 19:42:57.771567   15806 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0126 19:42:57.771601   15806 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0126 19:42:57.771636   15806 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1679 bytes)
	I0126 19:42:57.771708   15806 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem (1708 bytes)
	I0126 19:42:57.771745   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem -> /usr/share/ca-certificates/2083.pem
	I0126 19:42:57.771768   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem -> /usr/share/ca-certificates/20832.pem
	I0126 19:42:57.771793   15806 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:42:57.772467   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0126 19:42:57.790710   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0126 19:42:57.808202   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0126 19:42:57.826134   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0126 19:42:57.843461   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0126 19:42:57.860886   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0126 19:42:57.878722   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0126 19:42:57.896792   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0126 19:42:57.913272   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem --> /usr/share/ca-certificates/2083.pem (1338 bytes)
	I0126 19:42:57.930740   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /usr/share/ca-certificates/20832.pem (1708 bytes)
	I0126 19:42:57.948214   15806 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0126 19:42:57.966840   15806 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0126 19:42:57.980494   15806 ssh_runner.go:195] Run: openssl version
	I0126 19:42:57.986126   15806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2083.pem && ln -fs /usr/share/ca-certificates/2083.pem /etc/ssl/certs/2083.pem"
	I0126 19:42:57.994223   15806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2083.pem
	I0126 19:42:57.998549   15806 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:49 /usr/share/ca-certificates/2083.pem
	I0126 19:42:57.998597   15806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2083.pem
	I0126 19:42:58.004313   15806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2083.pem /etc/ssl/certs/51391683.0"
	I0126 19:42:58.012198   15806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20832.pem && ln -fs /usr/share/ca-certificates/20832.pem /etc/ssl/certs/20832.pem"
	I0126 19:42:58.020955   15806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20832.pem
	I0126 19:42:58.025129   15806 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:49 /usr/share/ca-certificates/20832.pem
	I0126 19:42:58.025173   15806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20832.pem
	I0126 19:42:58.031076   15806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20832.pem /etc/ssl/certs/3ec20f2e.0"
	I0126 19:42:58.039134   15806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0126 19:42:58.047471   15806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:42:58.051699   15806 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:43 /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:42:58.051743   15806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:42:58.057375   15806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
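	The ls/openssl/ln sequence above is minikube reproducing OpenSSL's hashed-directory layout: a CA cert in /etc/ssl/certs is only found by verifiers if a symlink named after its subject-name hash points at it. A minimal sketch of that step (the throwaway self-signed cert and its subject are stand-ins for minikubeCA.pem, not taken from the log):

```shell
# Generate a throwaway self-signed cert to play the role of minikubeCA.pem.
CERT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout /dev/null -out "$CERT" 2>/dev/null
# OpenSSL locates CA certs by an 8-hex-digit hash of the subject name.
HASH=$(openssl x509 -hash -noout -in "$CERT")
# On the VM the equivalent step is the log's:
#   sudo ln -fs <cert> /etc/ssl/certs/<hash>.0
echo "link name: ${HASH}.0"
```

	The trailing ".0" is the collision counter: a second cert with the same subject hash would be linked as "<hash>.1".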
	I0126 19:42:58.065365   15806 kubeadm.go:388] StartCluster: {Name:force-systemd-env-20220126194210-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:force-systemd-env-20220126194210-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 19:42:58.065481   15806 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0126 19:42:58.093631   15806 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0126 19:42:58.101106   15806 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0126 19:42:58.108723   15806 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0126 19:42:58.108777   15806 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0126 19:42:58.116001   15806 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0126 19:42:58.116022   15806 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0126 19:42:58.633547   15806 out.go:203]   - Generating certificates and keys ...
	I0126 19:43:00.208704   16187 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-27 03:42:52.661753287 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
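	The long comment block in the rewritten unit above describes the standard systemd override pattern: a second ExecStart= for a non-oneshot service is only legal after an empty ExecStart= clears the command inherited from the base unit. A stand-alone sketch of the same pattern (the file path and dockerd flags here are illustrative, not taken from the log):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
[Service]
# Clear the ExecStart= inherited from the base unit; without this, systemd
# refuses to start with "Service has more than one ExecStart= setting,
# which is only allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```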
	
	I0126 19:43:00.208730   16187 machine.go:91] provisioned docker machine in 9.151062453s
	I0126 19:43:00.208737   16187 client.go:171] LocalClient.Create took 19.990090769s
	I0126 19:43:00.208751   16187 start.go:168] duration metric: libmachine.API.Create for "false-20220126194239-2083" took 19.990140109s
	I0126 19:43:00.208761   16187 start.go:267] post-start starting for "false-20220126194239-2083" (driver="docker")
	I0126 19:43:00.208765   16187 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0126 19:43:00.208843   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0126 19:43:00.208970   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:00.325853   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:00.422977   16187 ssh_runner.go:195] Run: cat /etc/os-release
	I0126 19:43:00.427231   16187 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0126 19:43:00.427248   16187 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0126 19:43:00.427254   16187 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0126 19:43:00.427260   16187 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0126 19:43:00.427272   16187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/addons for local assets ...
	I0126 19:43:00.427366   16187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files for local assets ...
	I0126 19:43:00.427513   16187 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem -> 20832.pem in /etc/ssl/certs
	I0126 19:43:00.427682   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0126 19:43:00.435767   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /etc/ssl/certs/20832.pem (1708 bytes)
	I0126 19:43:00.460882   16187 start.go:270] post-start completed in 252.113397ms
	I0126 19:43:00.461646   16187 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220126194239-2083
	I0126 19:43:00.576378   16187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/config.json ...
	I0126 19:43:00.576821   16187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0126 19:43:00.576895   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:00.695056   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:00.786644   16187 start.go:129] duration metric: createHost completed in 20.595035356s
	I0126 19:43:00.786666   16187 start.go:80] releasing machines lock for "false-20220126194239-2083", held for 20.595157378s
	I0126 19:43:00.786772   16187 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220126194239-2083
	I0126 19:43:00.907364   16187 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0126 19:43:00.907372   16187 ssh_runner.go:195] Run: systemctl --version
	I0126 19:43:00.907470   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:00.907469   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:01.061493   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:01.061518   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:01.154816   16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0126 19:43:01.346809   16187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 19:43:01.356953   16187 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0126 19:43:01.357035   16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0126 19:43:01.366744   16187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0126 19:43:01.381810   16187 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0126 19:43:01.441503   16187 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0126 19:43:01.506444   16187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 19:43:01.518567   16187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0126 19:43:01.585286   16187 ssh_runner.go:195] Run: sudo systemctl start docker
	I0126 19:43:01.596054   16187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 19:43:01.650471   16187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 19:43:01.716065   16187 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0126 19:43:01.716175   16187 cli_runner.go:133] Run: docker exec -t false-20220126194239-2083 dig +short host.docker.internal
	I0126 19:43:01.884979   16187 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0126 19:43:01.885066   16187 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0126 19:43:01.889882   16187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
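	The one-liner above is minikube's idempotent host-record update: filter out any existing host.minikube.internal line, append the fresh record, then copy the result back over /etc/hosts. The same pattern against a scratch file (the paths and the stale 10.0.0.9 entry are invented for illustration):

```shell
HOSTS=$(mktemp)
# Seed the file with a stale record for host.minikube.internal.
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$HOSTS"
# Drop any old record, append the new one, then replace the file, so the
# update is safe to repeat and never duplicates the entry.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

	Running the block again would leave the file unchanged, which is why minikube can issue it unconditionally on every start.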
	I0126 19:43:01.899220   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:02.038125   16187 out.go:176]   - kubelet.housekeeping-interval=5m
	I0126 19:43:02.038197   16187 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:43:02.038278   16187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 19:43:02.068994   16187 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 19:43:02.069010   16187 docker.go:537] Images already preloaded, skipping extraction
	I0126 19:43:02.069112   16187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 19:43:02.098561   16187 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 19:43:02.098577   16187 cache_images.go:84] Images are preloaded, skipping loading
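	The check logged twice above reduces to a set comparison: every image the selected Kubernetes version needs must already appear in `docker images` output. A self-contained sketch with both lists inlined so it needs no Docker daemon (the comparison code is an assumption for illustration, not minikube's actual cache_images.go logic; the image names are copied from the stdout block above):

```shell
# Images required for v1.23.2 (subset of the log's preloaded-image list).
required='k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/pause:3.6'
# Stand-in for: docker images --format '{{.Repository}}:{{.Tag}}'
have='gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/pause:3.6'
# Lines present in $required but absent from $have.
missing=$(comm -23 <(sort <<<"$required") <(sort <<<"$have"))
if [ -z "$missing" ]; then
  echo "Images are preloaded, skipping loading"
else
  echo "missing: $missing"
fi
```

	An empty difference is what lets the log skip extraction of the preload tarball.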
	I0126 19:43:02.098698   16187 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0126 19:43:02.182111   16187 cni.go:93] Creating CNI manager for "false"
	I0126 19:43:02.182134   16187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0126 19:43:02.182149   16187 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220126194239-2083 NodeName:false-20220126194239-2083 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0126 19:43:02.182251   16187 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "false-20220126194239-2083"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0126 19:43:02.182317   16187 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=false-20220126194239-2083 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0126 19:43:02.182382   16187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0126 19:43:02.190657   16187 binaries.go:44] Found k8s binaries, skipping transfer
	I0126 19:43:02.190720   16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0126 19:43:02.198112   16187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0126 19:43:02.211392   16187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0126 19:43:02.224810   16187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0126 19:43:02.237793   16187 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0126 19:43:02.241904   16187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0126 19:43:02.252081   16187 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083 for IP: 192.168.49.2
	I0126 19:43:02.252217   16187 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0126 19:43:02.252271   16187 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0126 19:43:02.252325   16187 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.key
	I0126 19:43:02.252345   16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.crt with IP's: []
	I0126 19:43:02.349705   16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.crt ...
	I0126 19:43:02.349721   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.crt: {Name:mkaf5b8adef5fead697514791bcb21a11dc46f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.350031   16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.key ...
	I0126 19:43:02.350041   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/client.key: {Name:mkb14dfaa85ac1cb104e5665db474d639cd0b2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.350241   16187 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2
	I0126 19:43:02.350261   16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0126 19:43:02.454254   16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2 ...
	I0126 19:43:02.454275   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2: {Name:mka3b43595548da8b0105a569e9eb6ac90195069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.454559   16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2 ...
	I0126 19:43:02.454569   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2: {Name:mk73ebcee90bed93a03a9ffc206cc5005dbc71e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.454748   16187 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt
	I0126 19:43:02.454933   16187 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key
	I0126 19:43:02.455194   16187 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key
	I0126 19:43:02.455222   16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt with IP's: []
	I0126 19:43:02.521212   16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt ...
	I0126 19:43:02.521230   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt: {Name:mk65a44fb27fa55ccd5799764b8abbdde192f657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.521524   16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key ...
	I0126 19:43:02.521533   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key: {Name:mk7c73892718b6261c7ef58c42a9cfc9860fadd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:02.521967   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem (1338 bytes)
	W0126 19:43:02.522022   16187 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083_empty.pem, impossibly tiny 0 bytes
	I0126 19:43:02.522032   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1679 bytes)
	I0126 19:43:02.522070   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0126 19:43:02.522109   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0126 19:43:02.522143   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1679 bytes)
	I0126 19:43:02.522218   16187 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem (1708 bytes)
	I0126 19:43:02.523176   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0126 19:43:02.540724   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0126 19:43:02.560949   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0126 19:43:02.577831   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/false-20220126194239-2083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0126 19:43:02.594684   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0126 19:43:02.612559   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0126 19:43:02.630371   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0126 19:43:02.647386   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0126 19:43:02.664260   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /usr/share/ca-certificates/20832.pem (1708 bytes)
	I0126 19:43:02.681944   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0126 19:43:02.699036   16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem --> /usr/share/ca-certificates/2083.pem (1338 bytes)
	I0126 19:43:02.716299   16187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0126 19:43:02.729500   16187 ssh_runner.go:195] Run: openssl version
	I0126 19:43:02.734998   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20832.pem && ln -fs /usr/share/ca-certificates/20832.pem /etc/ssl/certs/20832.pem"
	I0126 19:43:02.742793   16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20832.pem
	I0126 19:43:02.746985   16187 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:49 /usr/share/ca-certificates/20832.pem
	I0126 19:43:02.747034   16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20832.pem
	I0126 19:43:02.753036   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20832.pem /etc/ssl/certs/3ec20f2e.0"
	I0126 19:43:02.760655   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0126 19:43:02.768289   16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:43:02.772749   16187 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:43 /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:43:02.772801   16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0126 19:43:02.778530   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0126 19:43:02.786671   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2083.pem && ln -fs /usr/share/ca-certificates/2083.pem /etc/ssl/certs/2083.pem"
	I0126 19:43:02.794929   16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2083.pem
	I0126 19:43:02.799103   16187 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:49 /usr/share/ca-certificates/2083.pem
	I0126 19:43:02.799157   16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2083.pem
	I0126 19:43:02.804688   16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2083.pem /etc/ssl/certs/51391683.0"
	I0126 19:43:02.813601   16187 kubeadm.go:388] StartCluster: {Name:false-20220126194239-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:false-20220126194239-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 19:43:02.813750   16187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0126 19:43:02.840880   16187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0126 19:43:02.849308   16187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0126 19:43:02.856711   16187 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0126 19:43:02.856763   16187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0126 19:43:02.863969   16187 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0126 19:43:02.863991   16187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0126 19:43:03.353659   16187 out.go:203]   - Generating certificates and keys ...
	I0126 19:43:01.030761   15806 out.go:203]   - Booting up control plane ...
	I0126 19:43:05.421713   16187 out.go:203]   - Booting up control plane ...
	I0126 19:43:13.073748   15806 out.go:203]   - Configuring RBAC rules ...
	I0126 19:43:13.457511   15806 cni.go:93] Creating CNI manager for ""
	I0126 19:43:13.457522   15806 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0126 19:43:13.457553   15806 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0126 19:43:13.457664   15806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 19:43:13.457670   15806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=df496161bea02a920f5582b36f44351d955cdf25 minikube.k8s.io/name=force-systemd-env-20220126194210-2083 minikube.k8s.io/updated_at=2022_01_26T19_43_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 19:43:13.696014   15806 ops.go:34] apiserver oom_adj: -16
	I0126 19:43:13.696077   15806 kubeadm.go:867] duration metric: took 238.501312ms to wait for elevateKubeSystemPrivileges.
	I0126 19:43:13.696092   15806 kubeadm.go:390] StartCluster complete in 15.630716854s
	I0126 19:43:13.696116   15806 settings.go:142] acquiring lock: {Name:mkb44f1d9eb2a533b4b0cb7d08d08147a57d8376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:13.696216   15806 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 19:43:13.696967   15806 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk2720725a2c48b74a1f04b19ffbd0e9d0a29d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:13.697628   15806 kapi.go:59] client config for force-systemd-env-20220126194210-2083: &rest.Config{Host:"https://127.0.0.1:58200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21cd280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0126 19:43:13.697959   15806 cert_rotation.go:137] Starting client certificate rotation controller
	I0126 19:43:14.224384   15806 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "force-systemd-env-20220126194210-2083" rescaled to 1
	I0126 19:43:14.224415   15806 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 19:43:14.224430   15806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0126 19:43:14.224440   15806 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0126 19:43:14.273076   15806 out.go:176] * Verifying Kubernetes components...
	I0126 19:43:14.224496   15806 addons.go:65] Setting storage-provisioner=true in profile "force-systemd-env-20220126194210-2083"
	I0126 19:43:14.224527   15806 addons.go:65] Setting default-storageclass=true in profile "force-systemd-env-20220126194210-2083"
	I0126 19:43:14.273122   15806 addons.go:153] Setting addon storage-provisioner=true in "force-systemd-env-20220126194210-2083"
	W0126 19:43:14.273132   15806 addons.go:165] addon storage-provisioner should already be in state true
	I0126 19:43:14.224599   15806 config.go:176] Loaded profile config "force-systemd-env-20220126194210-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:43:14.273156   15806 host.go:66] Checking if "force-systemd-env-20220126194210-2083" exists ...
	I0126 19:43:14.273154   15806 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-20220126194210-2083"
	I0126 19:43:14.273165   15806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0126 19:43:14.273456   15806 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220126194210-2083 --format={{.State.Status}}
	I0126 19:43:14.273564   15806 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220126194210-2083 --format={{.State.Status}}
	I0126 19:43:14.276459   15806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0126 19:43:14.289587   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:43:14.424730   15806 kapi.go:59] client config for force-systemd-env-20220126194210-2083: &rest.Config{Host:"https://127.0.0.1:58200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21cd280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0126 19:43:14.450672   15806 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0126 19:43:14.450842   15806 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 19:43:14.450852   15806 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0126 19:43:14.450923   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:43:14.459043   15806 addons.go:153] Setting addon default-storageclass=true in "force-systemd-env-20220126194210-2083"
	W0126 19:43:14.459077   15806 addons.go:165] addon default-storageclass should already be in state true
	I0126 19:43:14.459114   15806 host.go:66] Checking if "force-systemd-env-20220126194210-2083" exists ...
	I0126 19:43:14.459506   15806 cli_runner.go:133] Run: docker container inspect force-systemd-env-20220126194210-2083 --format={{.State.Status}}
	I0126 19:43:14.463033   15806 kapi.go:59] client config for force-systemd-env-20220126194210-2083: &rest.Config{Host:"https://127.0.0.1:58200", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/force-systemd-env-20220126194210-2083/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21cd280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0126 19:43:14.466906   15806 api_server.go:51] waiting for apiserver process to appear ...
	I0126 19:43:14.466973   15806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0126 19:43:14.589694   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:43:14.598725   15806 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0126 19:43:14.598744   15806 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0126 19:43:14.598862   15806 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220126194210-2083
	I0126 19:43:14.695908   15806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 19:43:14.732895   15806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58194 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/force-systemd-env-20220126194210-2083/id_rsa Username:docker}
	I0126 19:43:14.881450   15806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0126 19:43:14.972745   15806 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0126 19:43:14.972780   15806 api_server.go:71] duration metric: took 748.348474ms to wait for apiserver process to appear ...
	I0126 19:43:14.972795   15806 api_server.go:87] waiting for apiserver healthz status ...
	I0126 19:43:14.972810   15806 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58200/healthz ...
	I0126 19:43:14.983671   15806 api_server.go:266] https://127.0.0.1:58200/healthz returned 200:
	ok
	I0126 19:43:14.985159   15806 api_server.go:140] control plane version: v1.23.2
	I0126 19:43:14.985180   15806 api_server.go:130] duration metric: took 12.375257ms to wait for apiserver health ...
	I0126 19:43:14.985195   15806 system_pods.go:43] waiting for kube-system pods to appear ...
	I0126 19:43:14.992268   15806 system_pods.go:59] 2 kube-system pods found
	I0126 19:43:14.992288   15806 system_pods.go:61] "kube-controller-manager-force-systemd-env-20220126194210-2083" [cceca706-03ec-4e78-a2a2-7f3a1699264e] Pending
	I0126 19:43:14.992292   15806 system_pods.go:61] "kube-scheduler-force-systemd-env-20220126194210-2083" [0d555c19-de30-40cb-b2e7-71b3a5bd41fc] Pending
	I0126 19:43:14.992295   15806 system_pods.go:74] duration metric: took 7.094729ms to wait for pod list to return data ...
	I0126 19:43:14.992301   15806 kubeadm.go:542] duration metric: took 767.870516ms to wait for : map[apiserver:true system_pods:true] ...
	I0126 19:43:14.992309   15806 node_conditions.go:102] verifying NodePressure condition ...
	I0126 19:43:14.997470   15806 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0126 19:43:14.997486   15806 node_conditions.go:123] node cpu capacity is 6
	I0126 19:43:14.997497   15806 node_conditions.go:105] duration metric: took 5.183067ms to run NodePressure ...
	I0126 19:43:14.997504   15806 start.go:213] waiting for startup goroutines ...
	I0126 19:43:15.095882   15806 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0126 19:43:15.095932   15806 addons.go:417] enableAddons completed in 871.494958ms
	I0126 19:43:15.171554   15806 start.go:496] kubectl: 1.19.7, cluster: 1.23.2 (minor skew: 4)
	I0126 19:43:15.261516   15806 out.go:176] 
	W0126 19:43:15.261701   15806 out.go:241] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilities with Kubernetes 1.23.2.
	I0126 19:43:15.324454   15806 out.go:176]   - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	I0126 19:43:15.350566   15806 out.go:176] * Done! kubectl is now configured to use "force-systemd-env-20220126194210-2083" cluster and "default" namespace by default
	I0126 19:43:16.958385   16187 out.go:203]   - Configuring RBAC rules ...
	I0126 19:43:17.344444   16187 cni.go:93] Creating CNI manager for "false"
	I0126 19:43:17.344484   16187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0126 19:43:17.344585   16187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=df496161bea02a920f5582b36f44351d955cdf25 minikube.k8s.io/name=false-20220126194239-2083 minikube.k8s.io/updated_at=2022_01_26T19_43_17_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 19:43:17.344639   16187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 19:43:17.605712   16187 kubeadm.go:867] duration metric: took 261.212373ms to wait for elevateKubeSystemPrivileges.
	I0126 19:43:17.605765   16187 ops.go:34] apiserver oom_adj: -16
	I0126 19:43:17.605778   16187 kubeadm.go:390] StartCluster complete in 14.792168658s
	I0126 19:43:17.605797   16187 settings.go:142] acquiring lock: {Name:mkb44f1d9eb2a533b4b0cb7d08d08147a57d8376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:17.605909   16187 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 19:43:17.606849   16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk2720725a2c48b74a1f04b19ffbd0e9d0a29d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:43:18.133440   16187 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20220126194239-2083" rescaled to 1
	I0126 19:43:18.133485   16187 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 19:43:18.133496   16187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0126 19:43:18.133510   16187 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0126 19:43:18.159770   16187 out.go:176] * Verifying Kubernetes components...
	I0126 19:43:18.133661   16187 config.go:176] Loaded profile config "false-20220126194239-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:43:18.159826   16187 addons.go:65] Setting default-storageclass=true in profile "false-20220126194239-2083"
	I0126 19:43:18.159833   16187 addons.go:65] Setting storage-provisioner=true in profile "false-20220126194239-2083"
	I0126 19:43:18.159850   16187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20220126194239-2083"
	I0126 19:43:18.159856   16187 addons.go:153] Setting addon storage-provisioner=true in "false-20220126194239-2083"
	W0126 19:43:18.159862   16187 addons.go:165] addon storage-provisioner should already be in state true
	I0126 19:43:18.159871   16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0126 19:43:18.159886   16187 host.go:66] Checking if "false-20220126194239-2083" exists ...
	I0126 19:43:18.160546   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:43:18.180077   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:43:18.193711   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:18.193746   16187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0126 19:43:18.315892   16187 addons.go:153] Setting addon default-storageclass=true in "false-20220126194239-2083"
	W0126 19:43:18.315912   16187 addons.go:165] addon default-storageclass should already be in state true
	I0126 19:43:18.315932   16187 host.go:66] Checking if "false-20220126194239-2083" exists ...
	I0126 19:43:18.316470   16187 cli_runner.go:133] Run: docker container inspect false-20220126194239-2083 --format={{.State.Status}}
	I0126 19:43:18.358709   16187 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0126 19:43:18.358863   16187 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 19:43:18.358875   16187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0126 19:43:18.358979   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:18.370404   16187 api_server.go:51] waiting for apiserver process to appear ...
	I0126 19:43:18.370461   16187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0126 19:43:18.469616   16187 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0126 19:43:18.469628   16187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0126 19:43:18.469718   16187 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220126194239-2083
	I0126 19:43:18.501219   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:18.598643   16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58662 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/false-20220126194239-2083/id_rsa Username:docker}
	I0126 19:43:18.607810   16187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 19:43:18.708608   16187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0126 19:43:18.976809   16187 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0126 19:43:18.976869   16187 api_server.go:71] duration metric: took 843.363854ms to wait for apiserver process to appear ...
	I0126 19:43:18.976885   16187 api_server.go:87] waiting for apiserver healthz status ...
	I0126 19:43:18.976898   16187 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58661/healthz ...
	I0126 19:43:18.984821   16187 api_server.go:266] https://127.0.0.1:58661/healthz returned 200:
	ok
	I0126 19:43:18.986256   16187 api_server.go:140] control plane version: v1.23.2
	I0126 19:43:18.986279   16187 api_server.go:130] duration metric: took 9.387022ms to wait for apiserver health ...
	I0126 19:43:18.986290   16187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0126 19:43:18.993753   16187 system_pods.go:59] 4 kube-system pods found
	I0126 19:43:18.993769   16187 system_pods.go:61] "etcd-false-20220126194239-2083" [5d961796-78c4-4a12-a763-7c511ecbdcfd] Pending
	I0126 19:43:18.993773   16187 system_pods.go:61] "kube-apiserver-false-20220126194239-2083" [46c4d59c-4b36-436f-a0a9-a8fccf0d9b5e] Pending
	I0126 19:43:18.993776   16187 system_pods.go:61] "kube-controller-manager-false-20220126194239-2083" [01c57f67-4273-4897-a260-b8debc638cb4] Pending
	I0126 19:43:18.993784   16187 system_pods.go:61] "kube-scheduler-false-20220126194239-2083" [a8ddb167-27b0-4d82-abce-2ab8ebed59a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0126 19:43:18.993790   16187 system_pods.go:74] duration metric: took 7.495112ms to wait for pod list to return data ...
	I0126 19:43:18.993796   16187 kubeadm.go:542] duration metric: took 860.29603ms to wait for : map[apiserver:true system_pods:true] ...
	I0126 19:43:18.993807   16187 node_conditions.go:102] verifying NodePressure condition ...
	I0126 19:43:18.997542   16187 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0126 19:43:18.997560   16187 node_conditions.go:123] node cpu capacity is 6
	I0126 19:43:18.997573   16187 node_conditions.go:105] duration metric: took 3.760208ms to run NodePressure ...
	I0126 19:43:18.997581   16187 start.go:213] waiting for startup goroutines ...
	I0126 19:43:19.044196   16187 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0126 19:43:19.044213   16187 addons.go:417] enableAddons completed in 910.711919ms
	I0126 19:43:19.103103   16187 start.go:496] kubectl: 1.19.7, cluster: 1.23.2 (minor skew: 4)
	I0126 19:43:19.129240   16187 out.go:176] 
	W0126 19:43:19.129391   16187 out.go:241] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilities with Kubernetes 1.23.2.
	I0126 19:43:19.176018   16187 out.go:176]   - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	I0126 19:43:19.202330   16187 out.go:176] * Done! kubectl is now configured to use "false-20220126194239-2083" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-01-27 03:42:50 UTC, end at Thu 2022-01-27 03:43:21 UTC. --
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[220]: time="2022-01-27T03:42:54.223125088Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[220]: time="2022-01-27T03:42:54.224111503Z" level=info msg="Daemon shutdown complete"
	Jan 27 03:42:54 false-20220126194239-2083 systemd[1]: docker.service: Succeeded.
	Jan 27 03:42:54 false-20220126194239-2083 systemd[1]: Stopped Docker Application Container Engine.
	Jan 27 03:42:54 false-20220126194239-2083 systemd[1]: Starting Docker Application Container Engine...
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.266434077Z" level=info msg="Starting up"
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.268344909Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.268377647Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.268397659Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.268405678Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.269570176Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.269605699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.269620928Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.269629937Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.273502794Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.277813867Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.277872605Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 27 03:42:54 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:54.278089530Z" level=info msg="Loading containers: start."
	Jan 27 03:42:56 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:42:56.373015014Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 27 03:43:00 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:43:00.175768723Z" level=info msg="Loading containers: done."
	Jan 27 03:43:00 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:43:00.189477246Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
	Jan 27 03:43:00 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:43:00.189582654Z" level=info msg="Daemon has completed initialization"
	Jan 27 03:43:00 false-20220126194239-2083 systemd[1]: Started Docker Application Container Engine.
	Jan 27 03:43:00 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:43:00.217581908Z" level=info msg="API listen on [::]:2376"
	Jan 27 03:43:00 false-20220126194239-2083 dockerd[468]: time="2022-01-27T03:43:00.220699674Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2a925c0f1c657       4783639ba7e03       10 seconds ago      Running             kube-controller-manager   0                   b90f5563d521f
	d0fce80d2b31d       25f8c7f3da61c       10 seconds ago      Running             etcd                      0                   288d6cae59f39
	278b5f0768ff7       6114d758d6d16       10 seconds ago      Running             kube-scheduler            0                   cd7a7188beb68
	d3e7aef4b67a4       8a0228dd6a683       10 seconds ago      Running             kube-apiserver            0                   6c92323749de3
	
	* 
	* ==> describe nodes <==
	* Name:               false-20220126194239-2083
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=false-20220126194239-2083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=df496161bea02a920f5582b36f44351d955cdf25
	                    minikube.k8s.io/name=false-20220126194239-2083
	                    minikube.k8s.io/updated_at=2022_01_26T19_43_17_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 27 Jan 2022 03:43:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  false-20220126194239-2083
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 27 Jan 2022 03:43:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 27 Jan 2022 03:43:17 +0000   Thu, 27 Jan 2022 03:43:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 27 Jan 2022 03:43:17 +0000   Thu, 27 Jan 2022 03:43:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 27 Jan 2022 03:43:17 +0000   Thu, 27 Jan 2022 03:43:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 27 Jan 2022 03:43:17 +0000   Thu, 27 Jan 2022 03:43:17 +0000   KubeletNotReady              [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    false-20220126194239-2083
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                efcaa5a2-d22c-4cc7-a2d2-5b0804837124
	  Boot ID:                    33c91ce8-d5dd-4418-afe7-50c8d1fb0231
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.12
	  Kubelet Version:            v1.23.2
	  Kube-Proxy Version:         v1.23.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-false-20220126194239-2083                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         3s
	  kube-system                 kube-apiserver-false-20220126194239-2083             250m (4%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-false-20220126194239-2083    200m (3%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-false-20220126194239-2083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 15s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  15s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14s (x5 over 15s)  kubelet  Node false-20220126194239-2083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x5 over 15s)  kubelet  Node false-20220126194239-2083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x5 over 15s)  kubelet  Node false-20220126194239-2083 status is now: NodeHasSufficientPID
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node false-20220126194239-2083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node false-20220126194239-2083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node false-20220126194239-2083 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4s                 kubelet  Node false-20220126194239-2083 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.033012] bpfilter: read fail 0
	[  +0.030750] bpfilter: write fail -32
	[  +0.030830] bpfilter: read fail 0
	[  +0.030086] bpfilter: read fail 0
	[  +0.028965] bpfilter: read fail 0
	[  +0.028389] bpfilter: write fail -32
	[  +0.042271] bpfilter: read fail 0
	[  +0.029337] bpfilter: read fail 0
	[  +0.031509] bpfilter: read fail 0
	[  +0.030525] bpfilter: read fail 0
	[  +0.029922] bpfilter: read fail 0
	[  +0.029030] bpfilter: read fail 0
	[  +0.031120] bpfilter: read fail 0
	[  +0.030806] bpfilter: read fail 0
	[  +0.045985] bpfilter: read fail 0
	[  +0.030335] bpfilter: read fail 0
	[  +0.031775] bpfilter: read fail 0
	[  +0.022998] bpfilter: read fail 0
	[  +0.029367] bpfilter: read fail 0
	[  +0.027242] bpfilter: read fail 0
	[  +0.032065] bpfilter: read fail 0
	[  +0.029398] bpfilter: read fail 0
	[  +0.029501] bpfilter: read fail 0
	[  +0.034454] bpfilter: read fail 0
	[  +0.026832] bpfilter: read fail 0
	
	* 
	* ==> etcd [d0fce80d2b31] <==
	* {"level":"info","ts":"2022-01-27T03:43:11.417Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-01-27T03:43:11.417Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-01-27T03:43:11.417Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-01-27T03:43:12.225Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-27T03:43:12.226Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-27T03:43:12.226Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-27T03:43:12.226Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-27T03:43:12.226Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:false-20220126194239-2083 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-01-27T03:43:12.226Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-01-27T03:43:12.227Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-01-27T03:43:12.227Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-01-27T03:43:12.227Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-01-27T03:43:12.229Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-01-27T03:43:12.229Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2022-01-27T03:43:15.437Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"147.063488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-01-27T03:43:15.437Z","caller":"traceutil/trace.go:171","msg":"trace[905971290] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:173; }","duration":"147.22788ms","start":"2022-01-27T03:43:15.289Z","end":"2022-01-27T03:43:15.437Z","steps":["trace[905971290] 'agreement among raft nodes before linearized reading'  (duration: 74.03754ms)","trace[905971290] 'range keys from in-memory index tree'  (duration: 72.981036ms)"],"step_count":2}
	{"level":"warn","ts":"2022-01-27T03:43:15.437Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"159.994336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:namespace-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-01-27T03:43:15.437Z","caller":"traceutil/trace.go:171","msg":"trace[1836944180] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:namespace-controller; range_end:; response_count:0; response_revision:173; }","duration":"160.185426ms","start":"2022-01-27T03:43:15.277Z","end":"2022-01-27T03:43:15.437Z","steps":["trace[1836944180] 'agreement among raft nodes before linearized reading'  (duration: 86.974387ms)","trace[1836944180] 'range keys from in-memory index tree'  (duration: 73.006841ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  03:43:21 up  1:01,  0 users,  load average: 3.24, 2.85, 2.34
	Linux false-20220126194239-2083 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [d3e7aef4b67a] <==
	* I0127 03:43:13.934819       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0127 03:43:13.981282       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0127 03:43:13.991737       1 controller.go:611] quota admission added evaluator for: namespaces
	I0127 03:43:13.997611       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 03:43:13.997878       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 03:43:13.999934       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 03:43:14.000101       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0127 03:43:14.000512       1 cache.go:39] Caches are synced for autoregister controller
	I0127 03:43:14.022746       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0127 03:43:14.897603       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0127 03:43:14.903295       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0127 03:43:14.904091       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 03:43:14.905974       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0127 03:43:14.906028       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0127 03:43:15.497174       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 03:43:15.527912       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 03:43:15.600047       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0127 03:43:15.604569       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0127 03:43:15.605331       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 03:43:15.609015       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 03:43:16.039978       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 03:43:17.134404       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 03:43:17.146096       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0127 03:43:17.156245       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 03:43:17.391266       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	
	* 
	* ==> kube-controller-manager [2a925c0f1c65] <==
	* I0127 03:43:17.889068       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0127 03:43:17.889084       1 graph_builder.go:289] GraphBuilder running
	I0127 03:43:17.889101       1 controllermanager.go:605] Started "garbagecollector"
	I0127 03:43:18.140337       1 controllermanager.go:605] Started "attachdetach"
	I0127 03:43:18.140399       1 attach_detach_controller.go:328] Starting attach detach controller
	I0127 03:43:18.140405       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0127 03:43:18.290786       1 controllermanager.go:605] Started "pvc-protection"
	I0127 03:43:18.290827       1 pvc_protection_controller.go:103] "Starting PVC protection controller"
	I0127 03:43:18.290836       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0127 03:43:18.439937       1 controllermanager.go:605] Started "ephemeral-volume"
	I0127 03:43:18.440010       1 controller.go:170] Starting ephemeral volume controller
	I0127 03:43:18.440022       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0127 03:43:18.588466       1 controllermanager.go:605] Started "podgc"
	I0127 03:43:18.588600       1 gc_controller.go:89] Starting GC controller
	I0127 03:43:18.588617       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0127 03:43:18.891984       1 controllermanager.go:605] Started "namespace"
	I0127 03:43:18.892068       1 namespace_controller.go:200] Starting namespace controller
	I0127 03:43:18.892079       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0127 03:43:19.038062       1 controllermanager.go:605] Started "disruption"
	I0127 03:43:19.038078       1 disruption.go:363] Starting disruption controller
	I0127 03:43:19.038115       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0127 03:43:19.189259       1 controllermanager.go:605] Started "statefulset"
	I0127 03:43:19.189363       1 stateful_set.go:147] Starting stateful set controller
	I0127 03:43:19.189379       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0127 03:43:19.287115       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [278b5f0768ff] <==
	* W0127 03:43:13.985787       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:43:13.985863       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0127 03:43:13.985927       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:43:13.985956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0127 03:43:13.986015       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 03:43:13.986067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0127 03:43:13.986610       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 03:43:13.986694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0127 03:43:13.986948       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 03:43:13.986980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0127 03:43:13.989760       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 03:43:13.989827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0127 03:43:14.855315       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 03:43:14.855349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0127 03:43:14.875294       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 03:43:14.875359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0127 03:43:15.058164       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:43:15.058199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0127 03:43:15.085605       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:43:15.085661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0127 03:43:15.188966       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:43:15.189002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0127 03:43:15.370759       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:43:15.370824       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 03:43:17.178551       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-01-27 03:42:50 UTC, end at Thu 2022-01-27 03:43:22 UTC. --
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: E0127 03:43:18.115512    1956 kubelet.go:2001] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.216298    1956 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.216420    1956 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.216459    1956 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.216534    1956 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290333    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-flexvolume-dir\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290545    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-kubeconfig\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290577    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-usr-local-share-ca-certificates\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290594    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/35c2956fc6d24f27767ce3bcd164594a-etcd-certs\") pod \"etcd-false-20220126194239-2083\" (UID: \"35c2956fc6d24f27767ce3bcd164594a\") " pod="kube-system/etcd-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290613    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c76bffc448d84cec22431e19be34894-k8s-certs\") pod \"kube-apiserver-false-20220126194239-2083\" (UID: \"2c76bffc448d84cec22431e19be34894\") " pod="kube-system/kube-apiserver-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290745    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c76bffc448d84cec22431e19be34894-usr-share-ca-certificates\") pod \"kube-apiserver-false-20220126194239-2083\" (UID: \"2c76bffc448d84cec22431e19be34894\") " pod="kube-system/kube-apiserver-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290811    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-ca-certs\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.290976    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c76bffc448d84cec22431e19be34894-etc-ca-certificates\") pod \"kube-apiserver-false-20220126194239-2083\" (UID: \"2c76bffc448d84cec22431e19be34894\") " pod="kube-system/kube-apiserver-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291006    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-k8s-certs\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291078    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/35c2956fc6d24f27767ce3bcd164594a-etcd-data\") pod \"etcd-false-20220126194239-2083\" (UID: \"35c2956fc6d24f27767ce3bcd164594a\") " pod="kube-system/etcd-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291097    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c76bffc448d84cec22431e19be34894-ca-certs\") pod \"kube-apiserver-false-20220126194239-2083\" (UID: \"2c76bffc448d84cec22431e19be34894\") " pod="kube-system/kube-apiserver-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291160    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c76bffc448d84cec22431e19be34894-usr-local-share-ca-certificates\") pod \"kube-apiserver-false-20220126194239-2083\" (UID: \"2c76bffc448d84cec22431e19be34894\") " pod="kube-system/kube-apiserver-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291183    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-etc-ca-certificates\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291198    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d1742100b8c78906e9b57446dcf3f8f-usr-share-ca-certificates\") pod \"kube-controller-manager-false-20220126194239-2083\" (UID: \"4d1742100b8c78906e9b57446dcf3f8f\") " pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.291213    1956 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6280b823248d93e8d96a2a6b1ecc8acb-kubeconfig\") pod \"kube-scheduler-false-20220126194239-2083\" (UID: \"6280b823248d93e8d96a2a6b1ecc8acb\") " pod="kube-system/kube-scheduler-false-20220126194239-2083"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.346220    1956 apiserver.go:52] "Watching apiserver"
	Jan 27 03:43:18 false-20220126194239-2083 kubelet[1956]: I0127 03:43:18.593550    1956 reconciler.go:157] "Reconciler: start to sync state"
	Jan 27 03:43:19 false-20220126194239-2083 kubelet[1956]: E0127 03:43:19.349721    1956 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-false-20220126194239-2083\" already exists" pod="kube-system/kube-controller-manager-false-20220126194239-2083"
	Jan 27 03:43:19 false-20220126194239-2083 kubelet[1956]: E0127 03:43:19.550514    1956 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-false-20220126194239-2083\" already exists" pod="kube-system/kube-apiserver-false-20220126194239-2083"
	Jan 27 03:43:19 false-20220126194239-2083 kubelet[1956]: E0127 03:43:19.750635    1956 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-false-20220126194239-2083\" already exists" pod="kube-system/etcd-false-20220126194239-2083"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p false-20220126194239-2083 -n false-20220126194239-2083
helpers_test.go:262: (dbg) Run:  kubectl --context false-20220126194239-2083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:262: (dbg) Done: kubectl --context false-20220126194239-2083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.71573378s)
helpers_test.go:271: non-running pods: etcd-false-20220126194239-2083 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/false]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context false-20220126194239-2083 describe pod etcd-false-20220126194239-2083 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context false-20220126194239-2083 describe pod etcd-false-20220126194239-2083 storage-provisioner: exit status 1 (52.982559ms)

** stderr ** 
	Error from server (NotFound): pods "etcd-false-20220126194239-2083" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context false-20220126194239-2083 describe pod etcd-false-20220126194239-2083 storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "false-20220126194239-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p false-20220126194239-2083

=== CONT  TestNetworkPlugins/group/false
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p false-20220126194239-2083: (15.436441315s)
--- FAIL: TestNetworkPlugins/group/false (60.76s)

TestNetworkPlugins/group/auto/KubeletFlags (0.83s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220126194237-2083 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:252: expected cni network plugin with containerd/crio, got 1967 /var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20220126194237-2083 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
--- FAIL: TestNetworkPlugins/group/auto/KubeletFlags (0.83s)
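The KubeletFlags failure above reduces to a string check: the test captures the kubelet invocation with `pgrep -a kubelet` and expects a CNI network-plugin flag on it, which is absent here. A minimal sketch of that check (hypothetical; the command line below is trimmed from the failure output, and the flag name is an assumption based on the error message):

```shell
# Hypothetical re-creation of the KubeletFlags assertion: look for a CNI
# network-plugin flag on the kubelet command line reported by `pgrep -a kubelet`.
cmdline='/var/lib/minikube/binaries/v1.23.2/kubelet --container-runtime=docker --node-ip=192.168.58.2'
if printf '%s' "$cmdline" | grep -q -- '--network-plugin=cni'; then
  echo 'cni plugin configured'
else
  echo 'cni plugin missing'
fi
```

Against the captured command line this prints `cni plugin missing`, mirroring the failure.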

TestNetworkPlugins/group/custom-weave/Start (551.12s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20220126194339-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p custom-weave-20220126194339-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : exit status 105 (9m11.097083589s)

-- stdout --
	* [custom-weave-20220126194339-2083] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node custom-weave-20220126194339-2083 in cluster custom-weave-20220126194339-2083
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0126 19:59:31.745320   21374 out.go:297] Setting OutFile to fd 1 ...
	I0126 19:59:31.745467   21374 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:59:31.745472   21374 out.go:310] Setting ErrFile to fd 2...
	I0126 19:59:31.745475   21374 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:59:31.745544   21374 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 19:59:31.745865   21374 out.go:304] Setting JSON to false
	I0126 19:59:31.770551   21374 start.go:112] hostinfo: {"hostname":"37309.local","uptime":5346,"bootTime":1643250625,"procs":339,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 19:59:31.770655   21374 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 19:59:31.797792   21374 out.go:176] * [custom-weave-20220126194339-2083] minikube v1.25.1 on Darwin 11.2.3
	I0126 19:59:31.844322   21374 out.go:176]   - MINIKUBE_LOCATION=13251
	I0126 19:59:31.797950   21374 notify.go:174] Checking for updates...
	I0126 19:59:31.870520   21374 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 19:59:31.896692   21374 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0126 19:59:31.924251   21374 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0126 19:59:31.949338   21374 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0126 19:59:31.949850   21374 config.go:176] Loaded profile config "auto-20220126194237-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:59:31.949914   21374 driver.go:344] Setting default libvirt URI to qemu:///system
	I0126 19:59:32.050431   21374 docker.go:132] docker version: linux-20.10.6
	I0126 19:59:32.050571   21374 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 19:59:32.238220   21374 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:59:32.166831059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 19:59:32.264991   21374 out.go:176] * Using the docker driver based on user configuration
	I0126 19:59:32.265020   21374 start.go:281] selected driver: docker
	I0126 19:59:32.265027   21374 start.go:798] validating driver "docker" against <nil>
	I0126 19:59:32.265048   21374 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0126 19:59:32.267675   21374 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 19:59:32.453478   21374 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:59:32.380470334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 19:59:32.453611   21374 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0126 19:59:32.453741   21374 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0126 19:59:32.453759   21374 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0126 19:59:32.453777   21374 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0126 19:59:32.453796   21374 start_flags.go:297] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0126 19:59:32.453806   21374 start_flags.go:302] config:
	{Name:custom-weave-20220126194339-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:custom-weave-20220126194339-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 19:59:32.502072   21374 out.go:176] * Starting control plane node custom-weave-20220126194339-2083 in cluster custom-weave-20220126194339-2083
	I0126 19:59:32.502129   21374 cache.go:120] Beginning downloading kic base image for docker with docker
	I0126 19:59:32.528092   21374 out.go:176] * Pulling base image ...
	I0126 19:59:32.528132   21374 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:59:32.528167   21374 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0126 19:59:32.528185   21374 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0126 19:59:32.528205   21374 cache.go:57] Caching tarball of preloaded images
	I0126 19:59:32.528347   21374 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0126 19:59:32.528363   21374 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0126 19:59:32.529079   21374 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/config.json ...
	I0126 19:59:32.529188   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/config.json: {Name:mkbe527d3c591d80d03192ca26ebe50330d6b14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 19:59:32.643773   21374 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0126 19:59:32.643790   21374 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0126 19:59:32.643799   21374 cache.go:208] Successfully downloaded all kic artifacts
	I0126 19:59:32.643831   21374 start.go:313] acquiring machines lock for custom-weave-20220126194339-2083: {Name:mkf5a77014a9379a5d1f431c9468b2104a5c05ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 19:59:32.643967   21374 start.go:317] acquired machines lock for "custom-weave-20220126194339-2083" in 124.953µs
	I0126 19:59:32.643996   21374 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20220126194339-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:custom-weave-20220126194339-2083 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 19:59:32.644079   21374 start.go:126] createHost starting for "" (driver="docker")
	I0126 19:59:32.691356   21374 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0126 19:59:32.691742   21374 start.go:160] libmachine.API.Create for "custom-weave-20220126194339-2083" (driver="docker")
	I0126 19:59:32.691786   21374 client.go:168] LocalClient.Create starting
	I0126 19:59:32.691982   21374 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem
	I0126 19:59:32.692061   21374 main.go:130] libmachine: Decoding PEM data...
	I0126 19:59:32.692093   21374 main.go:130] libmachine: Parsing certificate...
	I0126 19:59:32.692195   21374 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem
	I0126 19:59:32.692247   21374 main.go:130] libmachine: Decoding PEM data...
	I0126 19:59:32.692267   21374 main.go:130] libmachine: Parsing certificate...
	I0126 19:59:32.693100   21374 cli_runner.go:133] Run: docker network inspect custom-weave-20220126194339-2083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0126 19:59:32.804353   21374 cli_runner.go:180] docker network inspect custom-weave-20220126194339-2083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0126 19:59:32.804459   21374 network_create.go:254] running [docker network inspect custom-weave-20220126194339-2083] to gather additional debugging logs...
	I0126 19:59:32.804478   21374 cli_runner.go:133] Run: docker network inspect custom-weave-20220126194339-2083
	W0126 19:59:32.916876   21374 cli_runner.go:180] docker network inspect custom-weave-20220126194339-2083 returned with exit code 1
	I0126 19:59:32.916899   21374 network_create.go:257] error running [docker network inspect custom-weave-20220126194339-2083]: docker network inspect custom-weave-20220126194339-2083: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220126194339-2083
	I0126 19:59:32.916912   21374 network_create.go:259] output of [docker network inspect custom-weave-20220126194339-2083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220126194339-2083
	
	** /stderr **
	I0126 19:59:32.917007   21374 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0126 19:59:33.029391   21374 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000724398] misses:0}
	I0126 19:59:33.029428   21374 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0126 19:59:33.029446   21374 network_create.go:106] attempt to create docker network custom-weave-20220126194339-2083 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0126 19:59:33.029522   21374 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220126194339-2083
	I0126 19:59:40.925850   21374 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220126194339-2083: (7.896194489s)
	I0126 19:59:40.925873   21374 network_create.go:90] docker network custom-weave-20220126194339-2083 192.168.49.0/24 created
	I0126 19:59:40.925889   21374 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20220126194339-2083" container
	I0126 19:59:40.926003   21374 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0126 19:59:41.037989   21374 cli_runner.go:133] Run: docker volume create custom-weave-20220126194339-2083 --label name.minikube.sigs.k8s.io=custom-weave-20220126194339-2083 --label created_by.minikube.sigs.k8s.io=true
	I0126 19:59:41.151338   21374 oci.go:102] Successfully created a docker volume custom-weave-20220126194339-2083
	I0126 19:59:41.151473   21374 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220126194339-2083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220126194339-2083 --entrypoint /usr/bin/test -v custom-weave-20220126194339-2083:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0126 19:59:41.651324   21374 oci.go:106] Successfully prepared a docker volume custom-weave-20220126194339-2083
	I0126 19:59:41.651374   21374 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 19:59:41.651388   21374 kic.go:179] Starting extracting preloaded images to volume ...
	I0126 19:59:41.651529   21374 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220126194339-2083:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0126 19:59:47.653732   21374 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220126194339-2083:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (6.00209601s)
	I0126 19:59:47.653754   21374 kic.go:188] duration metric: took 6.002301 seconds to extract preloaded images to volume
	I0126 19:59:47.653870   21374 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0126 19:59:47.836026   21374 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220126194339-2083 --name custom-weave-20220126194339-2083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220126194339-2083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220126194339-2083 --network custom-weave-20220126194339-2083 --ip 192.168.49.2 --volume custom-weave-20220126194339-2083:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0126 20:00:01.031142   21374 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220126194339-2083 --name custom-weave-20220126194339-2083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220126194339-2083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220126194339-2083 --network custom-weave-20220126194339-2083 --ip 192.168.49.2 --volume custom-weave-20220126194339-2083:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (13.194912579s)
	I0126 20:00:01.031266   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Running}}
	I0126 20:00:01.149551   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Status}}
	I0126 20:00:01.267212   21374 cli_runner.go:133] Run: docker exec custom-weave-20220126194339-2083 stat /var/lib/dpkg/alternatives/iptables
	I0126 20:00:01.465474   21374 oci.go:281] the created container "custom-weave-20220126194339-2083" has a running status.
	I0126 20:00:01.465503   21374 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa...
	I0126 20:00:01.572372   21374 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0126 20:00:01.756181   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Status}}
	I0126 20:00:01.876873   21374 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0126 20:00:01.876893   21374 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220126194339-2083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0126 20:00:02.041323   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Status}}
	I0126 20:00:02.157691   21374 machine.go:88] provisioning docker machine ...
	I0126 20:00:02.157746   21374 ubuntu.go:169] provisioning hostname "custom-weave-20220126194339-2083"
	I0126 20:00:02.157869   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:02.274609   21374 main.go:130] libmachine: Using SSH client type: native
	I0126 20:00:02.274825   21374 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 51838 <nil> <nil>}
	I0126 20:00:02.274839   21374 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220126194339-2083 && echo "custom-weave-20220126194339-2083" | sudo tee /etc/hostname
	I0126 20:00:02.420095   21374 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20220126194339-2083
	
	I0126 20:00:02.420183   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:02.542977   21374 main.go:130] libmachine: Using SSH client type: native
	I0126 20:00:02.543151   21374 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 51838 <nil> <nil>}
	I0126 20:00:02.543206   21374 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220126194339-2083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220126194339-2083/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220126194339-2083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0126 20:00:02.683995   21374 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0126 20:00:02.684017   21374 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube}
	I0126 20:00:02.684038   21374 ubuntu.go:177] setting up certificates
	I0126 20:00:02.684080   21374 provision.go:83] configureAuth start
	I0126 20:00:02.684206   21374 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220126194339-2083
	I0126 20:00:02.812667   21374 provision.go:138] copyHostCerts
	I0126 20:00:02.812770   21374 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem, removing ...
	I0126 20:00:02.812781   21374 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
	I0126 20:00:02.812893   21374 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem (1078 bytes)
	I0126 20:00:02.813082   21374 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem, removing ...
	I0126 20:00:02.813095   21374 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
	I0126 20:00:02.813177   21374 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem (1123 bytes)
	I0126 20:00:02.813334   21374 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem, removing ...
	I0126 20:00:02.813341   21374 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
	I0126 20:00:02.813403   21374 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem (1679 bytes)
	I0126 20:00:02.813541   21374 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220126194339-2083 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220126194339-2083]
	I0126 20:00:02.975864   21374 provision.go:172] copyRemoteCerts
	I0126 20:00:02.975922   21374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0126 20:00:02.975985   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:03.106201   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:03.206825   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0126 20:00:03.228681   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0126 20:00:03.249796   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0126 20:00:03.271540   21374 provision.go:86] duration metric: configureAuth took 587.436471ms
	I0126 20:00:03.271557   21374 ubuntu.go:193] setting minikube options for container-runtime
	I0126 20:00:03.271723   21374 config.go:176] Loaded profile config "custom-weave-20220126194339-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 20:00:03.271814   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:03.402785   21374 main.go:130] libmachine: Using SSH client type: native
	I0126 20:00:03.402938   21374 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 51838 <nil> <nil>}
	I0126 20:00:03.402954   21374 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0126 20:00:03.539697   21374 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0126 20:00:03.539717   21374 ubuntu.go:71] root file system type: overlay
	I0126 20:00:03.539878   21374 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0126 20:00:03.539982   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:03.663763   21374 main.go:130] libmachine: Using SSH client type: native
	I0126 20:00:03.663947   21374 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 51838 <nil> <nil>}
	I0126 20:00:03.664003   21374 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0126 20:00:03.812938   21374 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0126 20:00:03.813087   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:03.936498   21374 main.go:130] libmachine: Using SSH client type: native
	I0126 20:00:03.936686   21374 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 51838 <nil> <nil>}
	I0126 20:00:03.936703   21374 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0126 20:00:08.356498   21374 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-27 04:00:03.814967548 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0126 20:00:08.356520   21374 machine.go:91] provisioned docker machine in 6.198735318s
	I0126 20:00:08.356527   21374 client.go:171] LocalClient.Create took 35.664344903s
	I0126 20:00:08.356543   21374 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20220126194339-2083" took 35.664415743s
	I0126 20:00:08.356556   21374 start.go:267] post-start starting for "custom-weave-20220126194339-2083" (driver="docker")
	I0126 20:00:08.356561   21374 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0126 20:00:08.356639   21374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0126 20:00:08.356704   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:08.472421   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:08.566555   21374 ssh_runner.go:195] Run: cat /etc/os-release
	I0126 20:00:08.570303   21374 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0126 20:00:08.570320   21374 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0126 20:00:08.570326   21374 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0126 20:00:08.570335   21374 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0126 20:00:08.570344   21374 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/addons for local assets ...
	I0126 20:00:08.570437   21374 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files for local assets ...
	I0126 20:00:08.570587   21374 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem -> 20832.pem in /etc/ssl/certs
	I0126 20:00:08.570752   21374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0126 20:00:08.578121   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /etc/ssl/certs/20832.pem (1708 bytes)
	I0126 20:00:08.595960   21374 start.go:270] post-start completed in 239.392448ms
	I0126 20:00:08.596485   21374 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220126194339-2083
	I0126 20:00:08.714604   21374 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/config.json ...
	I0126 20:00:08.715003   21374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0126 20:00:08.715079   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:08.833051   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:08.926705   21374 start.go:129] duration metric: createHost completed in 36.282221712s
	I0126 20:00:08.926732   21374 start.go:80] releasing machines lock for "custom-weave-20220126194339-2083", held for 36.282363486s
	I0126 20:00:08.926889   21374 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220126194339-2083
	I0126 20:00:09.043409   21374 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0126 20:00:09.043426   21374 ssh_runner.go:195] Run: systemctl --version
	I0126 20:00:09.043495   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:09.043513   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:09.167070   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:09.167075   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:09.260257   21374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0126 20:00:09.459620   21374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 20:00:09.469601   21374 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0126 20:00:09.469663   21374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0126 20:00:09.479644   21374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0126 20:00:09.494124   21374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0126 20:00:09.554123   21374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0126 20:00:09.607913   21374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0126 20:00:09.617756   21374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0126 20:00:09.673232   21374 ssh_runner.go:195] Run: sudo systemctl start docker
	I0126 20:00:09.685371   21374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 20:00:09.729297   21374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0126 20:00:09.798781   21374 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0126 20:00:09.798926   21374 cli_runner.go:133] Run: docker exec -t custom-weave-20220126194339-2083 dig +short host.docker.internal
	I0126 20:00:09.978603   21374 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0126 20:00:09.978746   21374 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0126 20:00:09.982952   21374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0126 20:00:09.992376   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:10.132816   21374 out.go:176]   - kubelet.housekeeping-interval=5m
	I0126 20:00:10.132907   21374 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 20:00:10.132990   21374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 20:00:10.166208   21374 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 20:00:10.166224   21374 docker.go:537] Images already preloaded, skipping extraction
	I0126 20:00:10.166321   21374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0126 20:00:10.199649   21374 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0126 20:00:10.199673   21374 cache_images.go:84] Images are preloaded, skipping loading
	I0126 20:00:10.199789   21374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0126 20:00:10.281799   21374 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0126 20:00:10.281841   21374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0126 20:00:10.281855   21374 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220126194339-2083 NodeName:custom-weave-20220126194339-2083 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0126 20:00:10.281954   21374 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220126194339-2083"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0126 20:00:10.282038   21374 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220126194339-2083 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:custom-weave-20220126194339-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0126 20:00:10.282093   21374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0126 20:00:10.289920   21374 binaries.go:44] Found k8s binaries, skipping transfer
	I0126 20:00:10.289979   21374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0126 20:00:10.298030   21374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes)
	I0126 20:00:10.310523   21374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0126 20:00:10.322835   21374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0126 20:00:10.336087   21374 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0126 20:00:10.340118   21374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0126 20:00:10.349913   21374 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083 for IP: 192.168.49.2
	I0126 20:00:10.350046   21374 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0126 20:00:10.350124   21374 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0126 20:00:10.350173   21374 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/client.key
	I0126 20:00:10.350190   21374 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/client.crt with IP's: []
	I0126 20:00:10.405781   21374 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/client.crt ...
	I0126 20:00:10.405809   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/client.crt: {Name:mk7388b3882cf29f4d1ce29d03ebb3efbfd99bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:10.406258   21374 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/client.key ...
	I0126 20:00:10.406472   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/client.key: {Name:mkcc056fd7de179b842a849e258839affc52d4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:10.406743   21374 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.key.dd3b5fb2
	I0126 20:00:10.406765   21374 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0126 20:00:10.583923   21374 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.crt.dd3b5fb2 ...
	I0126 20:00:10.583939   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.crt.dd3b5fb2: {Name:mkf474efea8f2d3024be943279d00be0beb0aad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:10.584214   21374 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.key.dd3b5fb2 ...
	I0126 20:00:10.584223   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.key.dd3b5fb2: {Name:mk2f5bdd471ba19f326c1e29a35124edfdd6c4df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:10.584399   21374 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.crt
	I0126 20:00:10.584582   21374 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.key
	I0126 20:00:10.584748   21374 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.key
	I0126 20:00:10.584768   21374 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.crt with IP's: []
	I0126 20:00:10.651723   21374 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.crt ...
	I0126 20:00:10.651739   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.crt: {Name:mk65422980f6b3e2986041aecd259616ee2db6bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:10.652010   21374 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.key ...
	I0126 20:00:10.652018   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.key: {Name:mk8dbeeac3d9f7e217c11f869174d993e637639f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:10.652411   21374 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem (1338 bytes)
	W0126 20:00:10.652458   21374 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083_empty.pem, impossibly tiny 0 bytes
	I0126 20:00:10.652476   21374 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1679 bytes)
	I0126 20:00:10.652528   21374 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0126 20:00:10.652567   21374 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0126 20:00:10.652603   21374 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1679 bytes)
	I0126 20:00:10.652675   21374 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem (1708 bytes)
	I0126 20:00:10.653428   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0126 20:00:10.671896   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0126 20:00:10.690530   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0126 20:00:10.708524   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/custom-weave-20220126194339-2083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0126 20:00:10.726655   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0126 20:00:10.744544   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0126 20:00:10.762473   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0126 20:00:10.779253   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0126 20:00:10.796002   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0126 20:00:10.813945   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/2083.pem --> /usr/share/ca-certificates/2083.pem (1338 bytes)
	I0126 20:00:10.830723   21374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/20832.pem --> /usr/share/ca-certificates/20832.pem (1708 bytes)
	I0126 20:00:10.847396   21374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0126 20:00:10.860635   21374 ssh_runner.go:195] Run: openssl version
	I0126 20:00:10.866421   21374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0126 20:00:10.874317   21374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0126 20:00:10.878878   21374 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:43 /usr/share/ca-certificates/minikubeCA.pem
	I0126 20:00:10.878928   21374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0126 20:00:10.884717   21374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0126 20:00:10.892510   21374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2083.pem && ln -fs /usr/share/ca-certificates/2083.pem /etc/ssl/certs/2083.pem"
	I0126 20:00:10.900411   21374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2083.pem
	I0126 20:00:10.904958   21374 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:49 /usr/share/ca-certificates/2083.pem
	I0126 20:00:10.905016   21374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2083.pem
	I0126 20:00:10.910740   21374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2083.pem /etc/ssl/certs/51391683.0"
	I0126 20:00:10.918546   21374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20832.pem && ln -fs /usr/share/ca-certificates/20832.pem /etc/ssl/certs/20832.pem"
	I0126 20:00:10.926528   21374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20832.pem
	I0126 20:00:10.930784   21374 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:49 /usr/share/ca-certificates/20832.pem
	I0126 20:00:10.930843   21374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20832.pem
	I0126 20:00:10.936862   21374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20832.pem /etc/ssl/certs/3ec20f2e.0"
	I0126 20:00:10.944763   21374 kubeadm.go:388] StartCluster: {Name:custom-weave-20220126194339-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:custom-weave-20220126194339-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 20:00:10.944872   21374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0126 20:00:10.977214   21374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0126 20:00:10.986924   21374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0126 20:00:10.994280   21374 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0126 20:00:10.994337   21374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0126 20:00:11.003983   21374 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0126 20:00:11.004013   21374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0126 20:00:11.511079   21374 out.go:203]   - Generating certificates and keys ...
	I0126 20:00:14.177895   21374 out.go:203]   - Booting up control plane ...
	I0126 20:00:28.705664   21374 out.go:203]   - Configuring RBAC rules ...
	I0126 20:00:29.154119   21374 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0126 20:00:29.197665   21374 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0126 20:00:29.197752   21374 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
	I0126 20:00:29.197808   21374 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0126 20:00:29.203963   21374 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0126 20:00:29.203994   21374 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0126 20:00:29.225230   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0126 20:00:29.955529   21374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0126 20:00:29.955631   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:29.955645   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=df496161bea02a920f5582b36f44351d955cdf25 minikube.k8s.io/name=custom-weave-20220126194339-2083 minikube.k8s.io/updated_at=2022_01_26T20_00_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:29.976551   21374 ops.go:34] apiserver oom_adj: -16
	I0126 20:00:30.063442   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:30.644480   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:31.139333   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:31.639643   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:32.140312   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:32.639444   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:33.139992   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:33.644355   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:34.139319   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:34.639819   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:35.147439   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:35.639964   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:36.142938   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:36.639374   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:37.139474   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:37.641109   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:38.139648   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:38.640063   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:39.145002   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:39.640053   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:40.140035   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:40.643780   21374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0126 20:00:40.695612   21374 kubeadm.go:867] duration metric: took 10.739948584s to wait for elevateKubeSystemPrivileges.
	I0126 20:00:40.695628   21374 kubeadm.go:390] StartCluster complete in 29.750550075s
	I0126 20:00:40.695645   21374 settings.go:142] acquiring lock: {Name:mkb44f1d9eb2a533b4b0cb7d08d08147a57d8376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:40.695729   21374 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 20:00:40.696454   21374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk2720725a2c48b74a1f04b19ffbd0e9d0a29d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 20:00:41.225232   21374 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220126194339-2083" rescaled to 1
	I0126 20:00:41.225267   21374 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0126 20:00:41.225288   21374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0126 20:00:41.253100   21374 out.go:176] * Verifying Kubernetes components...
	I0126 20:00:41.225289   21374 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0126 20:00:41.225459   21374 config.go:176] Loaded profile config "custom-weave-20220126194339-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 20:00:41.253201   21374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0126 20:00:41.253199   21374 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220126194339-2083"
	I0126 20:00:41.253233   21374 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220126194339-2083"
	W0126 20:00:41.253241   21374 addons.go:165] addon storage-provisioner should already be in state true
	I0126 20:00:41.253200   21374 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220126194339-2083"
	I0126 20:00:41.253270   21374 host.go:66] Checking if "custom-weave-20220126194339-2083" exists ...
	I0126 20:00:41.253312   21374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220126194339-2083"
	I0126 20:00:41.253789   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Status}}
	I0126 20:00:41.264402   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:41.290476   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Status}}
	I0126 20:00:41.329104   21374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0126 20:00:41.453959   21374 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220126194339-2083" to be "Ready" ...
	I0126 20:00:41.461746   21374 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220126194339-2083"
	W0126 20:00:41.461764   21374 addons.go:165] addon default-storageclass should already be in state true
	I0126 20:00:41.461786   21374 host.go:66] Checking if "custom-weave-20220126194339-2083" exists ...
	I0126 20:00:41.462225   21374 cli_runner.go:133] Run: docker container inspect custom-weave-20220126194339-2083 --format={{.State.Status}}
	I0126 20:00:41.462266   21374 node_ready.go:49] node "custom-weave-20220126194339-2083" has status "Ready":"True"
	I0126 20:00:41.462279   21374 node_ready.go:38] duration metric: took 8.300532ms waiting for node "custom-weave-20220126194339-2083" to be "Ready" ...
	I0126 20:00:41.462289   21374 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0126 20:00:41.474774   21374 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-zdh52" in "kube-system" namespace to be "Ready" ...
	I0126 20:00:41.526210   21374 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0126 20:00:41.526320   21374 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 20:00:41.526329   21374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0126 20:00:41.526400   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:41.584904   21374 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0126 20:00:41.632414   21374 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0126 20:00:41.632425   21374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0126 20:00:41.632531   21374 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220126194339-2083
	I0126 20:00:41.659853   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:41.764439   21374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0126 20:00:41.779329   21374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51838 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/custom-weave-20220126194339-2083/id_rsa Username:docker}
	I0126 20:00:41.883691   21374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0126 20:00:42.088982   21374 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0126 20:00:42.089002   21374 addons.go:417] enableAddons completed in 863.711354ms
	I0126 20:00:43.499428   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:45.995392   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:47.996264   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:50.499513   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:52.999541   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:55.495195   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:57.496529   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:00:59.997654   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:02.495080   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:04.498024   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:06.995078   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:08.995201   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:11.494856   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:13.495799   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:15.496204   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:17.496307   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:19.496843   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:21.995873   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:24.494811   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:26.496136   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:28.996353   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:31.494872   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:33.497693   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:35.998350   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:38.496465   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:40.997135   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:43.494967   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:45.499898   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:47.995542   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:50.496565   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:52.497344   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:54.498660   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:56.996540   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:01:59.495524   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:01.499269   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:03.995868   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:05.995931   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:07.997010   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:10.497444   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:12.499903   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:14.997775   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:17.496018   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:19.999569   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:22.491540   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:24.500077   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:26.503575   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:28.997838   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:30.998572   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:33.496825   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:35.498178   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:37.498899   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:39.499403   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:41.997617   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:43.998542   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:46.499524   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:48.996949   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:50.997292   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:52.997502   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:54.997594   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:56.999380   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:02:59.499104   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:01.500908   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:03.991039   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:05.993265   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:07.993611   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:09.993770   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:11.998687   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:13.999327   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:16.498343   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:18.499812   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:20.997472   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:23.000017   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:25.000581   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:27.001562   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:29.499350   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:31.499612   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:33.998681   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:35.998825   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:38.498654   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:40.498825   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:42.999511   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:45.498920   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:47.500011   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:49.997702   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:51.999421   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:54.498292   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:56.499281   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:03:58.998304   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:00.998453   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:03.000442   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:05.498110   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:07.499067   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:09.998206   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:12.498943   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:14.998601   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:16.998700   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:18.998764   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:21.499075   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:23.500055   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:25.501225   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:28.000035   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:30.500522   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:32.999907   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:34.999994   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:37.498970   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:39.999357   21374 pod_ready.go:102] pod "coredns-64897985d-zdh52" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:41.504023   21374 pod_ready.go:81] duration metric: took 4m0.0266074s waiting for pod "coredns-64897985d-zdh52" in "kube-system" namespace to be "Ready" ...
	E0126 20:04:41.504039   21374 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0126 20:04:41.504048   21374 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.508103   21374 pod_ready.go:92] pod "etcd-custom-weave-20220126194339-2083" in "kube-system" namespace has status "Ready":"True"
	I0126 20:04:41.508112   21374 pod_ready.go:81] duration metric: took 4.059791ms waiting for pod "etcd-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.508118   21374 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.512132   21374 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220126194339-2083" in "kube-system" namespace has status "Ready":"True"
	I0126 20:04:41.512140   21374 pod_ready.go:81] duration metric: took 4.018333ms waiting for pod "kube-apiserver-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.512162   21374 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.516564   21374 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220126194339-2083" in "kube-system" namespace has status "Ready":"True"
	I0126 20:04:41.516574   21374 pod_ready.go:81] duration metric: took 4.406288ms waiting for pod "kube-controller-manager-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.516581   21374 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-2dhcg" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.896461   21374 pod_ready.go:92] pod "kube-proxy-2dhcg" in "kube-system" namespace has status "Ready":"True"
	I0126 20:04:41.896471   21374 pod_ready.go:81] duration metric: took 379.881961ms waiting for pod "kube-proxy-2dhcg" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:41.896477   21374 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:42.295877   21374 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220126194339-2083" in "kube-system" namespace has status "Ready":"True"
	I0126 20:04:42.295886   21374 pod_ready.go:81] duration metric: took 399.400433ms waiting for pod "kube-scheduler-custom-weave-20220126194339-2083" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:42.295892   21374 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-n67vz" in "kube-system" namespace to be "Ready" ...
	I0126 20:04:44.704853   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:46.711338   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:49.212258   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:51.212931   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:53.213850   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:55.709762   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:04:58.209621   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:00.703345   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:02.703810   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:04.707648   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:07.203534   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:09.210463   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:11.706299   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:13.708830   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:16.205900   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:18.715530   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:21.208960   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:23.713053   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:26.203601   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:28.210955   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:30.713574   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:33.205314   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:35.209617   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:37.705505   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:40.206161   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:42.208181   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:44.704558   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:46.711648   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:49.204415   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:51.208780   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:53.704352   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:55.706268   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:05:57.712407   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:00.207019   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:02.704809   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:05.205361   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:07.704657   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:09.711529   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:11.712164   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:14.206109   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:16.208437   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:18.711936   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:21.206282   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:23.713619   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:26.204492   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:28.207275   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:30.705641   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:33.212739   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:35.706760   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:37.708713   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:39.712058   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:42.205157   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:44.206147   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:46.706500   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:49.207018   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:51.207812   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:53.704369   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:55.706604   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:06:58.207572   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:00.707386   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:03.209798   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:05.711012   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:08.208479   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:10.707966   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:12.712123   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:15.212165   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:17.707894   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:19.709289   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:22.208813   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:24.705134   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:26.706656   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:29.206475   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:31.707474   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:34.209466   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:36.710180   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:39.207400   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:41.208708   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:43.707033   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:46.208034   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:48.705597   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:51.204926   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:53.206802   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:55.217466   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:07:57.712041   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:00.205335   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:02.205991   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:04.211247   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:06.712388   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:09.206403   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:11.208710   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:13.714131   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:16.205696   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:18.205762   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:20.206662   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:22.207461   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:24.212735   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:26.706757   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:28.706975   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:31.208008   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:33.707501   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:36.208304   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:38.709142   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:41.214610   21374 pod_ready.go:102] pod "weave-net-n67vz" in "kube-system" namespace has status "Ready":"False"
	I0126 20:08:42.712367   21374 pod_ready.go:81] duration metric: took 4m0.41385278s waiting for pod "weave-net-n67vz" in "kube-system" namespace to be "Ready" ...
	E0126 20:08:42.712377   21374 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0126 20:08:42.712381   21374 pod_ready.go:38] duration metric: took 8m1.244847236s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0126 20:08:42.712401   21374 api_server.go:51] waiting for apiserver process to appear ...
	I0126 20:08:42.739460   21374 out.go:176] 
	W0126 20:08:42.739622   21374 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0126 20:08:42.739711   21374 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0126 20:08:42.739724   21374 out.go:241] * Related issues:
	* Related issues:
	W0126 20:08:42.739775   21374 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0126 20:08:42.739828   21374 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0126 20:08:42.781933   21374 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (551.12s)

TestNetworkPlugins/group/calico/DNS (338.7s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158719754s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:04:52.954694    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 20:04:54.757906    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151881765s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:05:12.786803    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13180932s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:05:22.504033    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:05:28.047253    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149806218s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158888755s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130482555s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:06:39.708134    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:39.713301    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:39.723409    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:39.743563    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:39.786512    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:39.866862    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:40.027752    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:40.349697    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:40.990912    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150434477s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:06:42.271099    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:44.831713    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:06:46.987029    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:06:49.954947    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:07:00.199139    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154033257s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:07:20.684352    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158405436s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:07:54.408468    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 20:08:01.651493    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150722364s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:08:10.100643    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139105873s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:10:12.788528    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context calico-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136097817s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/calico/DNS (338.70s)

TestNetworkPlugins/group/kindnet/DNS (279.53s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138477215s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154469449s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:12:54.416582    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137682235s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135797225s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:13:31.172037    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147389542s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136922495s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129827748s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:14:16.924007    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:16.930315    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:16.942128    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:16.962308    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:17.002492    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:17.082572    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:17.242814    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:17.570599    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:18.210743    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:19.492122    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:14:22.053144    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:14:27.174334    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147455434s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:14:37.419582    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:14:52.959026    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 20:14:54.764718    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14881844s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:15:38.861722    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130670319s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220126194239-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130631342s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (279.53s)

TestNetworkPlugins/group/enable-default-cni/DNS (353.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:15:12.799351    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140055189s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:15:28.053545    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136825913s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14842297s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131044268s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:16:17.876165    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150441334s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:16:39.719840    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:16:46.991528    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130567982s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:17:00.787730    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129033815s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125017064s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:17:54.419585    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131850113s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.19653617s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:19:44.630791    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:19:52.966394    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151229262s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154613996s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (353.70s)

TestNetworkPlugins/group/bridge/DNS (334.33s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:19:16.931293    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139826698s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14908422s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134488406s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:19:54.772390    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:20:12.802329    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128994115s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:20:28.061601    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151110026s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127140201s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:20:57.550782    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12527045s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144038096s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:21:39.725426    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:21:47.003860    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 20:21:49.532720    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:49.540402    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:49.552022    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:49.576114    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:49.618036    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:49.698193    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:49.858918    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:50.180815    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:50.823338    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
E0126 20:21:52.109008    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:54.672722    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:21:59.794050    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158362829s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0126 20:22:10.037912    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
E0126 20:22:30.522102    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: signal: killed (2.477954695s)
E0126 20:22:54.427826    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 20:23:02.807667    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:23:11.482921    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (966ns)
E0126 20:24:16.936403    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
E0126 20:24:33.408040    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/kindnet-20220126194239-2083/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220126194238-2083 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.204µs)
net_test.go:169: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:174: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (334.33s)

Test pass (240/275)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.99
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.23.2/json-events 19.71
14 TestDownloadOnly/v1.23.2/kubectl 0
15 TestDownloadOnly/v1.23.2/LogsDuration 0.28
17 TestDownloadOnly/v1.23.3-rc.0/json-events 10.93
21 TestDownloadOnly/v1.23.3-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.3-rc.0/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.12
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.63
26 TestBinaryMirror 1.87
27 TestOffline 121.36
29 TestAddons/Setup 159.11
33 TestAddons/parallel/MetricsServer 5.78
34 TestAddons/parallel/HelmTiller 11.27
36 TestAddons/parallel/CSI 44.09
38 TestAddons/serial/GCPAuth 17.34
39 TestAddons/StoppedEnableDisable 18.39
40 TestCertOptions 103.24
41 TestCertExpiration 246.08
43 TestForceSystemdFlag 87.27
44 TestForceSystemdEnv 80.5
46 TestHyperKitDriverInstallOrUpdate 8.84
49 TestErrorSpam/setup 75.32
50 TestErrorSpam/start 2.35
51 TestErrorSpam/status 1.94
52 TestErrorSpam/pause 2.16
53 TestErrorSpam/unpause 2.19
54 TestErrorSpam/stop 18.46
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 129.57
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 7.43
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 1.74
65 TestFunctional/serial/CacheCmd/cache/add_remote 7.79
66 TestFunctional/serial/CacheCmd/cache/add_local 2.11
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.07
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.73
71 TestFunctional/serial/CacheCmd/cache/delete 0.14
72 TestFunctional/serial/MinikubeKubectlCmd 0.46
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.55
74 TestFunctional/serial/ExtraConfig 62.91
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 2.51
79 TestFunctional/parallel/ConfigCmd 0.4
80 TestFunctional/parallel/DashboardCmd 3.09
81 TestFunctional/parallel/DryRun 1.49
82 TestFunctional/parallel/InternationalLanguage 0.64
83 TestFunctional/parallel/StatusCmd 2.11
87 TestFunctional/parallel/AddonsCmd 0.29
88 TestFunctional/parallel/PersistentVolumeClaim 27.89
90 TestFunctional/parallel/SSHCmd 1.27
91 TestFunctional/parallel/CpCmd 2.63
92 TestFunctional/parallel/MySQL 20.79
93 TestFunctional/parallel/FileSync 0.72
94 TestFunctional/parallel/CertSync 4.11
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0
102 TestFunctional/parallel/Version/short 0.1
103 TestFunctional/parallel/Version/components 1.33
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.47
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.44
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
107 TestFunctional/parallel/ImageCommands/ImageListYaml 0.5
108 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
109 TestFunctional/parallel/ImageCommands/Setup 2.24
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.76
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.47
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.98
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.51
114 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.83
115 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.96
116 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.04
117 TestFunctional/parallel/ImageCommands/ImageRemove 0.96
118 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.19
119 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.83
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.86
121 TestFunctional/parallel/ProfileCmd/profile_list 0.75
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.83
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 4.12
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
133 TestFunctional/parallel/MountCmd/any-port 9.76
134 TestFunctional/parallel/MountCmd/specific-port 3.47
135 TestFunctional/delete_addon-resizer_images 0.27
136 TestFunctional/delete_my-image_image 0.12
137 TestFunctional/delete_minikube_cached_images 0.12
140 TestIngressAddonLegacy/StartLegacyK8sCluster 138.39
142 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.43
143 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
147 TestJSONOutput/start/Command 127.57
148 TestJSONOutput/start/Audit 0
150 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/pause/Command 0.86
154 TestJSONOutput/pause/Audit 0
156 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/unpause/Command 0.84
160 TestJSONOutput/unpause/Audit 0
162 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/stop/Command 18.09
166 TestJSONOutput/stop/Audit 0
168 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
170 TestErrorJSONOutput 0.78
172 TestKicCustomNetwork/create_custom_network 92.84
173 TestKicCustomNetwork/use_default_bridge_network 78.1
174 TestKicExistingNetwork 93.09
175 TestMainNoArgs 0.07
178 TestMountStart/serial/StartWithMountFirst 49.31
179 TestMountStart/serial/VerifyMountFirst 0.61
180 TestMountStart/serial/StartWithMountSecond 49.31
181 TestMountStart/serial/VerifyMountSecond 0.61
182 TestMountStart/serial/DeleteFirst 12.58
183 TestMountStart/serial/VerifyMountPostDelete 0.61
184 TestMountStart/serial/Stop 7.79
185 TestMountStart/serial/RestartStopped 31.3
186 TestMountStart/serial/VerifyMountPostStop 0.61
189 TestMultiNode/serial/FreshStart2Nodes 237.74
190 TestMultiNode/serial/DeployApp2Nodes 6.23
191 TestMultiNode/serial/PingHostFrom2Pods 0.84
192 TestMultiNode/serial/AddNode 120.05
193 TestMultiNode/serial/ProfileList 0.69
194 TestMultiNode/serial/CopyFile 22.87
195 TestMultiNode/serial/StopNode 12.45
196 TestMultiNode/serial/StartAfterStop 51.85
197 TestMultiNode/serial/RestartKeepsNodes 264.04
198 TestMultiNode/serial/DeleteNode 17.6
199 TestMultiNode/serial/StopMultiNode 25.84
200 TestMultiNode/serial/RestartMultiNode 151.29
201 TestMultiNode/serial/ValidateNameConflict 104.79
205 TestPreload 236.8
207 TestScheduledStopUnix 156.11
210 TestInsufficientStorage 66.62
211 TestRunningBinaryUpgrade 191.08
213 TestKubernetesUpgrade 219.48
214 TestMissingContainerUpgrade 169.61
216 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
217 TestNoKubernetes/serial/StartWithK8s 67.14
218 TestNoKubernetes/serial/StartWithStopK8s 30.66
219 TestNoKubernetes/serial/Start 40.02
220 TestNoKubernetes/serial/VerifyK8sNotRunning 0.77
221 TestNoKubernetes/serial/ProfileList 2.33
222 TestNoKubernetes/serial/Stop 1.95
223 TestNoKubernetes/serial/StartNoArgs 13.65
224 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.63
225 TestStoppedBinaryUpgrade/Setup 1.33
226 TestStoppedBinaryUpgrade/Upgrade 151.03
227 TestStoppedBinaryUpgrade/MinikubeLogs 2.27
236 TestPause/serial/Start 117.23
237 TestPause/serial/SecondStartNoReconfiguration 7.63
238 TestPause/serial/Pause 0.85
239 TestPause/serial/VerifyStatus 0.64
240 TestPause/serial/Unpause 0.83
241 TestPause/serial/PauseAgain 0.89
242 TestPause/serial/DeletePaused 17.8
243 TestPause/serial/VerifyDeletedResources 3.63
252 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 11.06
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 14.75
258 TestStartStop/group/old-k8s-version/serial/FirstStart 163.61
260 TestStartStop/group/no-preload/serial/FirstStart 135.94
261 TestStartStop/group/old-k8s-version/serial/DeployApp 11.17
262 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.77
263 TestStartStop/group/old-k8s-version/serial/Stop 18.38
264 TestStartStop/group/no-preload/serial/DeployApp 11.15
265 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.45
266 TestStartStop/group/old-k8s-version/serial/SecondStart 123.78
267 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
268 TestStartStop/group/no-preload/serial/Stop 12.5
269 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.41
270 TestStartStop/group/no-preload/serial/SecondStart 103.52
271 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
272 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.9
273 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
274 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 7.17
275 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.67
276 TestStartStop/group/no-preload/serial/Pause 5.05
277 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.78
278 TestStartStop/group/old-k8s-version/serial/Pause 4.38
280 TestStartStop/group/embed-certs/serial/FirstStart 69.88
282 TestStartStop/group/default-k8s-different-port/serial/FirstStart 113.33
283 TestStartStop/group/embed-certs/serial/DeployApp 11.06
284 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
285 TestStartStop/group/embed-certs/serial/Stop 16.77
286 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.4
287 TestStartStop/group/embed-certs/serial/SecondStart 103.12
288 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.03
289 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.75
290 TestStartStop/group/default-k8s-different-port/serial/Stop 14.6
291 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.52
292 TestStartStop/group/default-k8s-different-port/serial/SecondStart 91.43
293 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
294 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.89
295 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.67
296 TestStartStop/group/embed-certs/serial/Pause 4.57
298 TestStartStop/group/newest-cni/serial/FirstStart 68.99
299 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
300 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 7.12
301 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.66
302 TestStartStop/group/default-k8s-different-port/serial/Pause 4.47
303 TestNetworkPlugins/group/auto/Start 114.21
304 TestStartStop/group/newest-cni/serial/DeployApp 0
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.89
306 TestStartStop/group/newest-cni/serial/Stop 18.08
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.39
308 TestStartStop/group/newest-cni/serial/SecondStart 51.81
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.68
312 TestStartStop/group/newest-cni/serial/Pause 4.89
315 TestNetworkPlugins/group/cilium/Start 124.58
316 TestNetworkPlugins/group/cilium/ControllerPod 5.03
317 TestNetworkPlugins/group/cilium/KubeletFlags 0.67
318 TestNetworkPlugins/group/cilium/NetCatPod 14.54
319 TestNetworkPlugins/group/cilium/DNS 0.16
320 TestNetworkPlugins/group/cilium/Localhost 0.14
321 TestNetworkPlugins/group/cilium/HairPin 0.14
322 TestNetworkPlugins/group/calico/Start 122.68
323 TestNetworkPlugins/group/calico/ControllerPod 5.02
324 TestNetworkPlugins/group/calico/KubeletFlags 0.66
325 TestNetworkPlugins/group/calico/NetCatPod 12.09
327 TestNetworkPlugins/group/enable-default-cni/Start 348.6
328 TestNetworkPlugins/group/kindnet/Start 81.52
329 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
330 TestNetworkPlugins/group/kindnet/KubeletFlags 0.65
331 TestNetworkPlugins/group/kindnet/NetCatPod 12.99
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.65
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16
336 TestNetworkPlugins/group/bridge/Start 101.98
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.72
338 TestNetworkPlugins/group/bridge/NetCatPod 15.91
TestDownloadOnly/v1.16.0/json-events (18.99s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220126184146-2083 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime= --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220126184146-2083 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime= --driver=docker : (18.986200678s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.99s)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
--- PASS: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
--- PASS: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220126184146-2083
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220126184146-2083: exit status 85 (277.275386ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/26 18:41:46
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0126 18:41:46.281346    2084 out.go:297] Setting OutFile to fd 1 ...
	I0126 18:41:46.281475    2084 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:41:46.281480    2084 out.go:310] Setting ErrFile to fd 2...
	I0126 18:41:46.281483    2084 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:41:46.281553    2084 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	W0126 18:41:46.281640    2084 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: no such file or directory
	I0126 18:41:46.282102    2084 out.go:304] Setting JSON to true
	I0126 18:41:46.309383    2084 start.go:112] hostinfo: {"hostname":"37309.local","uptime":681,"bootTime":1643250625,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 18:41:46.309478    2084 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 18:41:46.335576    2084 notify.go:174] Checking for updates...
	W0126 18:41:46.335678    2084 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball: no such file or directory
	I0126 18:41:46.362158    2084 driver.go:344] Setting default libvirt URI to qemu:///system
	W0126 18:41:46.448825    2084 docker.go:108] docker version returned error: exit status 1
	I0126 18:41:46.475333    2084 start.go:281] selected driver: docker
	I0126 18:41:46.475350    2084 start.go:798] validating driver "docker" against <nil>
	I0126 18:41:46.475502    2084 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:41:46.642571    2084 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:41:46.695203    2084 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:41:46.861298    2084 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:41:46.888232    2084 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0126 18:41:46.941682    2084 start_flags.go:369] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0126 18:41:46.941784    2084 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0126 18:41:46.941797    2084 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0126 18:41:46.941822    2084 cni.go:93] Creating CNI manager for ""
	I0126 18:41:46.941829    2084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0126 18:41:46.941839    2084 start_flags.go:302] config:
	{Name:download-only-20220126184146-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220126184146-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:41:46.967883    2084 cache.go:120] Beginning downloading kic base image for docker with docker
	I0126 18:41:46.993919    2084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0126 18:41:46.993926    2084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0126 18:41:46.994730    2084 cache.go:107] acquiring lock: {Name:mk7ce660627c49ce1cea687920910eb36bd32cb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.994776    2084 cache.go:107] acquiring lock: {Name:mk6a246b6821c6ac28ea9a391824ca7a46c41df7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995067    2084 cache.go:107] acquiring lock: {Name:mk44d96383a3184312fedc39aef83c85fbba3017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995299    2084 cache.go:107] acquiring lock: {Name:mkb20bbbaad13efeaeddd4f0ed71114e5122da39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995463    2084 cache.go:107] acquiring lock: {Name:mk0c333e346d6cdf8d62b689bdc84a5f93358184 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995465    2084 cache.go:107] acquiring lock: {Name:mk1256525743b8e75ecd8ea65c84cdf3e47ec28f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995488    2084 cache.go:107] acquiring lock: {Name:mk25b06416913ba9331ec086c91924a66fccaa94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995938    2084 cache.go:107] acquiring lock: {Name:mk66364fcc86da5d4b96923059ff9d631be9dd8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.996004    2084 cache.go:107] acquiring lock: {Name:mkf6dd3a3e32f8ed7da2995f9b91bf7801155903 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.996025    2084 cache.go:107] acquiring lock: {Name:mk24167661eed9aa530042b666142237ec8b0c9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0126 18:41:46.995912    2084 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/download-only-20220126184146-2083/config.json ...
	I0126 18:41:46.996143    2084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/download-only-20220126184146-2083/config.json: {Name:mk3233c8b2252d19fbdd6b44d4c80b8592b46284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0126 18:41:46.996158    2084 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0126 18:41:46.996169    2084 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0126 18:41:46.996174    2084 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0126 18:41:46.996174    2084 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0126 18:41:46.996231    2084 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0126 18:41:46.996338    2084 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0126 18:41:46.996338    2084 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0126 18:41:46.996447    2084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0126 18:41:46.996497    2084 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0126 18:41:46.996498    2084 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0126 18:41:46.996668    2084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0126 18:41:46.997052    2084 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.16.0/kubectl
	I0126 18:41:46.997052    2084 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.16.0/kubeadm
	I0126 18:41:46.997057    2084 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.16.0/kubelet
	I0126 18:41:46.998700    2084 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:46.998824    2084 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:46.999156    2084 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:46.999751    2084 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.001019    2084 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.001115    2084 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.001195    2084 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.001204    2084 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.001119    2084 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.001454    2084 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0126 18:41:47.107245    2084 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
	I0126 18:41:47.107402    2084 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
	I0126 18:41:47.107483    2084 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
	I0126 18:41:47.879529    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0126 18:41:48.159602    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I0126 18:41:48.174193    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I0126 18:41:48.406346    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I0126 18:41:48.468833    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
	I0126 18:41:48.468840    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
	I0126 18:41:48.484753    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
	I0126 18:41:48.539229    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0126 18:41:48.645059    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0126 18:41:48.645078    2084 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 1.64998086s
	I0126 18:41:48.645092    2084 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0126 18:41:48.671530    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
	I0126 18:41:48.724798    2084 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
	I0126 18:41:48.978907    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0126 18:41:48.978928    2084 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.983851544s
	I0126 18:41:48.978945    2084 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0126 18:41:49.225084    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0126 18:41:49.225100    2084 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 2.230208896s
	I0126 18:41:49.225113    2084 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0126 18:41:50.310452    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0126 18:41:50.310474    2084 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 3.315386026s
	I0126 18:41:50.310483    2084 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0126 18:41:51.021504    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
	I0126 18:41:51.021530    2084 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2" took 4.026355175s
	I0126 18:41:51.021539    2084 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
	I0126 18:41:51.635932    2084 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/darwin/v1.16.0/kubectl
	I0126 18:41:52.201950    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0126 18:41:52.201969    2084 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0" took 5.207115638s
	I0126 18:41:52.201977    2084 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0126 18:41:52.290780    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0126 18:41:52.290797    2084 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0" took 5.29558141s
	I0126 18:41:52.290807    2084 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0126 18:41:52.998075    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0126 18:41:52.998094    2084 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0" took 6.00399061s
	I0126 18:41:52.998102    2084 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0126 18:41:53.079882    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0126 18:41:53.079898    2084 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0" took 6.085298807s
	I0126 18:41:53.079906    2084 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0126 18:41:53.810109    2084 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
	I0126 18:41:53.810129    2084 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0" took 6.815033438s
	I0126 18:41:53.810137    2084 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0126 18:41:53.810150    2084 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220126184146-2083"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.23.2/json-events (19.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220126184146-2083 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime= --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220126184146-2083 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime= --driver=docker : (19.706456076s)
--- PASS: TestDownloadOnly/v1.23.2/json-events (19.71s)

                                                
                                    
TestDownloadOnly/v1.23.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/kubectl
--- PASS: TestDownloadOnly/v1.23.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220126184146-2083
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220126184146-2083: exit status 85 (281.390131ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/26 18:42:05
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0126 18:42:05.770628    2141 out.go:297] Setting OutFile to fd 1 ...
	I0126 18:42:05.770755    2141 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:42:05.770760    2141 out.go:310] Setting ErrFile to fd 2...
	I0126 18:42:05.770763    2141 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:42:05.770839    2141 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	W0126 18:42:05.770922    2141 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: no such file or directory
	I0126 18:42:05.771077    2141 out.go:304] Setting JSON to true
	I0126 18:42:05.795062    2141 start.go:112] hostinfo: {"hostname":"37309.local","uptime":700,"bootTime":1643250625,"procs":327,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 18:42:05.795175    2141 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 18:42:05.822198    2141 notify.go:174] Checking for updates...
	W0126 18:42:05.822206    2141 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball: no such file or directory
	I0126 18:42:05.848225    2141 config.go:176] Loaded profile config "download-only-20220126184146-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0126 18:42:05.848285    2141 start.go:706] api.Load failed for download-only-20220126184146-2083: filestore "download-only-20220126184146-2083": Docker machine "download-only-20220126184146-2083" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0126 18:42:05.848348    2141 driver.go:344] Setting default libvirt URI to qemu:///system
	W0126 18:42:05.848381    2141 start.go:706] api.Load failed for download-only-20220126184146-2083: filestore "download-only-20220126184146-2083": Docker machine "download-only-20220126184146-2083" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0126 18:42:11.851313    2141 docker.go:108] docker version returned error: deadline exceeded running "docker version --format {{.Server.Os}}-{{.Server.Version}}": signal: killed
	I0126 18:42:11.878277    2141 start.go:281] selected driver: docker
	I0126 18:42:11.878297    2141 start.go:798] validating driver "docker" against &{Name:download-only-20220126184146-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220126184146-2083 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:42:11.878630    2141 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:42:14.426040    2141 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.547391866s)
	I0126 18:42:14.426384    2141 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2022-01-27 02:42:14.363612009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:42:14.426635    2141 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:42:14.601090    2141 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2022-01-27 02:42:14.54107983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAd
dress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secco
mp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:42:14.603069    2141 cni.go:93] Creating CNI manager for ""
	I0126 18:42:14.603089    2141 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0126 18:42:14.603104    2141 start_flags.go:302] config:
	{Name:download-only-20220126184146-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:download-only-20220126184146-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:42:14.629983    2141 cache.go:120] Beginning downloading kic base image for docker with docker
	I0126 18:42:14.655898    2141 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 18:42:14.656076    2141 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0126 18:42:14.750230    2141 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0126 18:42:14.750255    2141 cache.go:57] Caching tarball of preloaded images
	I0126 18:42:14.750448    2141 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0126 18:42:14.776631    2141 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 ...
	I0126 18:42:14.786990    2141 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0126 18:42:14.787007    2141 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0126 18:42:14.897658    2141 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4?checksum=md5:6fa926c88a747ae43bb3bda5a3741fe2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220126184146-2083"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.2/LogsDuration (0.28s)
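The cached-images failure quoted at the top of this report reduces to a `stat` on each expected image file under the profile's image cache. A minimal sketch of that check, assuming the cache layout shown in the failure message (a temp dir stands in for the real `MINIKUBE_HOME`, and the image names are illustrative):

```shell
# Sketch of the check behind TestDownloadOnly/*/cached-images:
# each expected image must exist as a file under the profile's image cache.
cache="$(mktemp -d)/.minikube/cache/images/k8s.gcr.io"
mkdir -p "$cache"
touch "$cache/kube-apiserver_v1.23.2"   # simulate one cached image
for img in kube-apiserver_v1.23.2 kube-proxy_v1.23.2; do
  if [ -e "$cache/$img" ]; then
    echo "found $img"
  else
    echo "missing $img"   # the condition the test reports as a failure
  fi
done
```

In the failing run above, none of the `kube-*_v1.23.2` files were present, so every `stat` returned "no such file or directory".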

TestDownloadOnly/v1.23.3-rc.0/json-events (10.93s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220126184146-2083 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime= --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220126184146-2083 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime= --driver=docker : (10.934245639s)
--- PASS: TestDownloadOnly/v1.23.3-rc.0/json-events (10.93s)

TestDownloadOnly/v1.23.3-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.3-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220126184146-2083
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220126184146-2083: exit status 85 (277.585938ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/26 18:42:26
	Running on machine: 37309
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0126 18:42:26.067136    2186 out.go:297] Setting OutFile to fd 1 ...
	I0126 18:42:26.067280    2186 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:42:26.067285    2186 out.go:310] Setting ErrFile to fd 2...
	I0126 18:42:26.067289    2186 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:42:26.067366    2186 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	W0126 18:42:26.067452    2186 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: no such file or directory
	I0126 18:42:26.067599    2186 out.go:304] Setting JSON to true
	I0126 18:42:26.092663    2186 start.go:112] hostinfo: {"hostname":"37309.local","uptime":721,"bootTime":1643250625,"procs":331,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 18:42:26.092769    2186 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 18:42:26.120060    2186 notify.go:174] Checking for updates...
	I0126 18:42:26.146658    2186 config.go:176] Loaded profile config "download-only-20220126184146-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	W0126 18:42:26.146728    2186 start.go:706] api.Load failed for download-only-20220126184146-2083: filestore "download-only-20220126184146-2083": Docker machine "download-only-20220126184146-2083" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0126 18:42:26.146776    2186 driver.go:344] Setting default libvirt URI to qemu:///system
	W0126 18:42:26.146802    2186 start.go:706] api.Load failed for download-only-20220126184146-2083: filestore "download-only-20220126184146-2083": Docker machine "download-only-20220126184146-2083" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0126 18:42:26.271088    2186 docker.go:132] docker version: linux-20.10.6
	I0126 18:42:26.271212    2186 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:42:26.442397    2186 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2022-01-27 02:42:26.393501836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:42:26.469160    2186 start.go:281] selected driver: docker
	I0126 18:42:26.469194    2186 start.go:798] validating driver "docker" against &{Name:download-only-20220126184146-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:download-only-20220126184146-2083 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:42:26.469540    2186 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:42:26.642400    2186 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2022-01-27 02:42:26.591141855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:42:26.644400    2186 cni.go:93] Creating CNI manager for ""
	I0126 18:42:26.644419    2186 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0126 18:42:26.644434    2186 start_flags.go:302] config:
	{Name:download-only-20220126184146-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3-rc.0 ClusterName:download-only-20220126184146-2083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:42:26.671515    2186 cache.go:120] Beginning downloading kic base image for docker with docker
	I0126 18:42:26.697992    2186 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0126 18:42:26.697994    2186 preload.go:132] Checking if preload exists for k8s version v1.23.3-rc.0 and runtime docker
	I0126 18:42:26.782149    2186 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3-rc.0/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4
	I0126 18:42:26.782181    2186 cache.go:57] Caching tarball of preloaded images
	I0126 18:42:26.782389    2186 preload.go:132] Checking if preload exists for k8s version v1.23.3-rc.0 and runtime docker
	I0126 18:42:26.807898    2186 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0126 18:42:26.820480    2186 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0126 18:42:26.820498    2186 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0126 18:42:26.951292    2186 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3-rc.0/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:98a0ed725de43435c7e0fb42aa7ffb00 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4
	I0126 18:42:34.971536    2186 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0126 18:42:34.971685    2186 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0126 18:42:35.753879    2186 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3-rc.0 on docker
	I0126 18:42:35.753969    2186 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/download-only-20220126184146-2083/config.json ...
	I0126 18:42:35.754275    2186 preload.go:132] Checking if preload exists for k8s version v1.23.3-rc.0 and runtime docker
	I0126 18:42:35.754484    2186 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.3-rc.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.3-rc.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/darwin/v1.23.3-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220126184146-2083"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.28s)
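The preload download lines above carry a `?checksum=md5:<hex>` query parameter, and the log then shows `getting checksum`, `saving checksum`, and `verifying checksumm` steps. A stand-alone sketch of that handshake, assuming GNU `md5sum` and using stand-in file contents rather than a real tarball:

```shell
# Sketch of the preload checksum handshake seen in the log: the expected md5
# comes from the download URL, and the saved tarball's digest is compared
# against it before the preload is trusted.
f="$(mktemp)"
printf 'preload-bytes' > "$f"                              # stand-in tarball
want="$(printf 'preload-bytes' | md5sum | cut -d' ' -f1)"  # value from the URL
got="$(md5sum "$f" | cut -d' ' -f1)"                       # digest of the file on disk
if [ "$got" = "$want" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch"
fi
```

This is only an illustration of the comparison; the real implementation lives in minikube's `preload.go` and `download.go`.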

TestDownloadOnly/DeleteAll (1.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:193: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.11854182s)
--- PASS: TestDownloadOnly/DeleteAll (1.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.63s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220126184146-2083
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.63s)

TestBinaryMirror (1.87s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220126184246-2083 --alsologtostderr --binary-mirror http://127.0.0.1:49817 --driver=docker 
helpers_test.go:176: Cleaning up "binary-mirror-20220126184246-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220126184246-2083
--- PASS: TestBinaryMirror (1.87s)

TestOffline (121.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline--20220126193232-2083 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:56: (dbg) Done: out/minikube-darwin-amd64 start -p offline--20220126193232-2083 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m49.040482404s)
helpers_test.go:176: Cleaning up "offline--20220126193232-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline--20220126193232-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p offline--20220126193232-2083: (12.324191761s)
--- PASS: TestOffline (121.36s)

TestAddons/Setup (159.11s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220126184248-2083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220126184248-2083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m39.108081003s)
--- PASS: TestAddons/Setup (159.11s)

TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 2.395676ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-6b76bd68b6-s2g4j" [221da72e-6e89-4bb9-9228-a6b1c8bbf457] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009756161s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220126184248-2083 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

TestAddons/parallel/HelmTiller (11.27s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 13.808357ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-mjbmw" [00d01db2-873b-495b-a2ee-7484890bd0bf] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01157302s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220126184248-2083 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220126184248-2083 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.651710825s)
addons_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.27s)

TestAddons/parallel/CSI (44.09s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 4.965192ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220126184248-2083 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220126184248-2083 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220126184248-2083 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [fa514974-6187-4cb4-bde6-74a63b00a77a] Pending
helpers_test.go:343: "task-pv-pod" [fa514974-6187-4cb4-bde6-74a63b00a77a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [fa514974-6187-4cb4-bde6-74a63b00a77a] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.015273282s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220126184248-2083 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220126184248-2083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220126184248-2083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220126184248-2083 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220126184248-2083 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220126184248-2083 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220126184248-2083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220126184248-2083 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [163d2405-fb11-49a4-bf96-27b585a989d2] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [163d2405-fb11-49a4-bf96-27b585a989d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [163d2405-fb11-49a4-bf96-27b585a989d2] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.014694323s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220126184248-2083 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220126184248-2083 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220126184248-2083 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.914578119s)
addons_test.go:593: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.09s)

TestAddons/serial/GCPAuth (17.34s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220126184248-2083 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [8e8ee13f-f97d-4ee3-89de-99c752c347eb] Pending
helpers_test.go:343: "busybox" [8e8ee13f-f97d-4ee3-89de-99c752c347eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [8e8ee13f-f97d-4ee3-89de-99c752c347eb] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.012713674s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220126184248-2083 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20220126184248-2083 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220126184248-2083 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220126184248-2083 addons disable gcp-auth --alsologtostderr -v=1: (6.725095642s)
--- PASS: TestAddons/serial/GCPAuth (17.34s)

TestAddons/StoppedEnableDisable (18.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220126184248-2083
addons_test.go:133: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220126184248-2083: (17.955262194s)
addons_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220126184248-2083
addons_test.go:141: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220126184248-2083
--- PASS: TestAddons/StoppedEnableDisable (18.39s)

TestCertOptions (103.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220126194524-2083 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0126 19:45:27.994559    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:46:46.926112    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
cert_options_test.go:50: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220126194524-2083 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (1m25.165674744s)
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220126194524-2083 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220126194524-2083 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220126194524-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220126194524-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220126194524-2083: (16.679293811s)
--- PASS: TestCertOptions (103.24s)

TestCertExpiration (246.08s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220126194348-2083 --memory=2048 --cert-expiration=3m --driver=docker 
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current4073024338/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current4073024338/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current4073024338/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220126194348-2083 --memory=2048 --cert-expiration=3m --driver=docker : (53.405163621s)

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220126194348-2083 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220126194348-2083 --memory=2048 --cert-expiration=8760h --driver=docker : (7.040291219s)
helpers_test.go:176: Cleaning up "cert-expiration-20220126194348-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220126194348-2083
E0126 19:47:54.361073    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220126194348-2083: (5.635924298s)
--- PASS: TestCertExpiration (246.08s)

TestForceSystemdFlag (87.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220126194356-2083 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220126194356-2083 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m10.891947405s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20220126194356-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220126194356-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220126194356-2083: (16.382382917s)
--- PASS: TestForceSystemdFlag (87.27s)

TestForceSystemdEnv (80.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220126194210-2083 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220126194210-2083 --memory=2048 --alsologtostderr -v=5 --driver=docker : (1m5.000385918s)
helpers_test.go:176: Cleaning up "force-systemd-env-20220126194210-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220126194210-2083

=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220126194210-2083: (15.500747735s)
--- PASS: TestForceSystemdEnv (80.50s)

TestHyperKitDriverInstallOrUpdate (8.84s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperKitDriverInstallOrUpdate (8.84s)

TestErrorSpam/setup (75.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220126184712-2083 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 --driver=docker 
error_spam_test.go:79: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220126184712-2083 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 --driver=docker : (1m15.318611914s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.2."
--- PASS: TestErrorSpam/setup (75.32s)

TestErrorSpam/start (2.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 start --dry-run
--- PASS: TestErrorSpam/start (2.35s)

TestErrorSpam/status (1.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 status
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 status
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 status
--- PASS: TestErrorSpam/status (1.94s)

TestErrorSpam/pause (2.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 pause
--- PASS: TestErrorSpam/pause (2.16s)

TestErrorSpam/unpause (2.19s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 unpause
--- PASS: TestErrorSpam/unpause (2.19s)

TestErrorSpam/stop (18.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 stop
error_spam_test.go:157: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 stop: (17.722204797s)
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220126184712-2083 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220126184712-2083 stop
--- PASS: TestErrorSpam/stop (18.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1707: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/test/nested/copy/2083/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (129.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2089: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E0126 18:50:27.954658    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:27.960784    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:27.971464    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:27.991562    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:28.037843    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:28.118017    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:28.283266    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:28.603616    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:29.254036    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:30.536781    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:33.107650    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:38.230210    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:50:48.476293    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:51:08.968200    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
functional_test.go:2089: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m9.567201655s)
--- PASS: TestFunctional/serial/StartWithProxy (129.57s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.43s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --alsologtostderr -v=8: (7.428342468s)
functional_test.go:659: soft start took 7.428967481s for "functional-20220126184901-2083" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.43s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (1.74s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220126184901-2083 get po -A
functional_test.go:692: (dbg) Done: kubectl --context functional-20220126184901-2083 get po -A: (1.73846228s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.74s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add k8s.gcr.io/pause:3.1: (1.402133981s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add k8s.gcr.io/pause:3.3: (3.221189469s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add k8s.gcr.io/pause:latest: (3.17048845s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.79s)

TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220126184901-2083 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20220126184901-20833875365575
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add minikube-local-cache-test:functional-20220126184901-2083
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache add minikube-local-cache-test:functional-20220126184901-2083: (1.495362781s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cache delete minikube-local-cache-test:functional-20220126184901-2083
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220126184901-2083
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.73s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.46s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 kubectl -- --context functional-20220126184901-2083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.46s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220126184901-2083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.55s)

TestFunctional/serial/ExtraConfig (62.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0126 18:51:49.933577    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.908669248s)
functional_test.go:757: restart took 1m2.908777799s for "functional-20220126184901-2083" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.91s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220126184901-2083 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 logs: (2.513133349s)
--- PASS: TestFunctional/serial/LogsCmd (2.51s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 config get cpus: exit status 14 (43.599902ms)

** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 config get cpus: exit status 14 (45.046741ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (3.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220126184901-2083 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220126184901-2083 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 4944: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.09s)

TestFunctional/parallel/DryRun (1.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (653.209512ms)

-- stdout --
	* [functional-20220126184901-2083] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0126 18:53:44.928562    4883 out.go:297] Setting OutFile to fd 1 ...
	I0126 18:53:44.928714    4883 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:53:44.928719    4883 out.go:310] Setting ErrFile to fd 2...
	I0126 18:53:44.928722    4883 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:53:44.928791    4883 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 18:53:44.929039    4883 out.go:304] Setting JSON to false
	I0126 18:53:44.953159    4883 start.go:112] hostinfo: {"hostname":"37309.local","uptime":1399,"bootTime":1643250625,"procs":321,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 18:53:44.953251    4883 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 18:53:44.996771    4883 out.go:176] * [functional-20220126184901-2083] minikube v1.25.1 on Darwin 11.2.3
	I0126 18:53:45.046760    4883 out.go:176]   - MINIKUBE_LOCATION=13251
	I0126 18:53:45.072977    4883 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 18:53:45.098917    4883 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0126 18:53:45.124934    4883 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0126 18:53:45.150923    4883 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0126 18:53:45.151422    4883 config.go:176] Loaded profile config "functional-20220126184901-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 18:53:45.151754    4883 driver.go:344] Setting default libvirt URI to qemu:///system
	I0126 18:53:45.252174    4883 docker.go:132] docker version: linux-20.10.6
	I0126 18:53:45.252310    4883 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:53:45.437947    4883 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-27 02:53:45.376146887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:53:45.464016    4883 out.go:176] * Using the docker driver based on existing profile
	I0126 18:53:45.464057    4883 start.go:281] selected driver: docker
	I0126 18:53:45.464066    4883 start.go:798] validating driver "docker" against &{Name:functional-20220126184901-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220126184901-2083 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false reg
istry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:53:45.464217    4883 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0126 18:53:45.493514    4883 out.go:176] 
	W0126 18:53:45.493702    4883 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0126 18:53:45.519767    4883 out.go:176] 

** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.49s)

TestFunctional/parallel/InternationalLanguage (0.64s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220126184901-2083 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (636.335451ms)

-- stdout --
	* [functional-20220126184901-2083] minikube v1.25.1 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0126 18:53:40.369137    4721 out.go:297] Setting OutFile to fd 1 ...
	I0126 18:53:40.369262    4721 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:53:40.369267    4721 out.go:310] Setting ErrFile to fd 2...
	I0126 18:53:40.369270    4721 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 18:53:40.369381    4721 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 18:53:40.369643    4721 out.go:304] Setting JSON to false
	I0126 18:53:40.394073    4721 start.go:112] hostinfo: {"hostname":"37309.local","uptime":1395,"bootTime":1643250625,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0126 18:53:40.394164    4721 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0126 18:53:40.421176    4721 out.go:176] * [functional-20220126184901-2083] minikube v1.25.1 sur Darwin 11.2.3
	I0126 18:53:40.467773    4721 out.go:176]   - MINIKUBE_LOCATION=13251
	I0126 18:53:40.493771    4721 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0126 18:53:40.519609    4721 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0126 18:53:40.545848    4721 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0126 18:53:40.571818    4721 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0126 18:53:40.572207    4721 config.go:176] Loaded profile config "functional-20220126184901-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 18:53:40.572549    4721 driver.go:344] Setting default libvirt URI to qemu:///system
	I0126 18:53:40.673318    4721 docker.go:132] docker version: linux-20.10.6
	I0126 18:53:40.673439    4721 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0126 18:53:40.860950    4721 info.go:263] docker info: {ID:ZJ7X:TVJ7:HUNO:HUBE:4PQK:VNQQ:C42R:OA4J:2HUY:BLPP:3M27:3QAI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-27 02:53:40.793034941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0126 18:53:40.887678    4721 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0126 18:53:40.887740    4721 start.go:281] selected driver: docker
	I0126 18:53:40.887747    4721 start.go:798] validating driver "docker" against &{Name:functional-20220126184901-2083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220126184901-2083 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false reg
istry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0126 18:53:40.887816    4721 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0126 18:53:40.915455    4721 out.go:176] 
	W0126 18:53:40.915556    4721 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0126 18:53:40.941438    4721 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)

TestFunctional/parallel/StatusCmd (2.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.11s)

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1541: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 addons list
functional_test.go:1553: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (27.89s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [d8037208-7cdc-4486-b7db-60f8cdf8c429] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011033037s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220126184901-2083 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220126184901-2083 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220126184901-2083 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220126184901-2083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [9e1abf39-3665-42d0-b060-5edce02285e8] Pending
helpers_test.go:343: "sp-pod" [9e1abf39-3665-42d0-b060-5edce02285e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [9e1abf39-3665-42d0-b060-5edce02285e8] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.016049023s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220126184901-2083 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220126184901-2083 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220126184901-2083 delete -f testdata/storage-provisioner/pod.yaml: (1.006662744s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220126184901-2083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [5e0a0042-888a-4efa-a864-a8c8ceb407ec] Pending
helpers_test.go:343: "sp-pod" [5e0a0042-888a-4efa-a864-a8c8ceb407ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [5e0a0042-888a-4efa-a864-a8c8ceb407ec] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.007536544s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220126184901-2083 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.89s)

TestFunctional/parallel/SSHCmd (1.27s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1576: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1593: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.27s)

TestFunctional/parallel/CpCmd (2.63s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh -n functional-20220126184901-2083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 cp functional-20220126184901-2083:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_test1097618731/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh -n functional-20220126184901-2083 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.63s)

TestFunctional/parallel/MySQL (20.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20220126184901-2083 replace --force -f testdata/mysql.yaml
functional_test.go:1651: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-gz248" [b5803721-e09d-42e8-8e5d-305c3a4597e8] Pending
helpers_test.go:343: "mysql-b87c45988-gz248" [b5803721-e09d-42e8-8e5d-305c3a4597e8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-gz248" [b5803721-e09d-42e8-8e5d-305c3a4597e8] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1651: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.013792569s
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220126184901-2083 exec mysql-b87c45988-gz248 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220126184901-2083 exec mysql-b87c45988-gz248 -- mysql -ppassword -e "show databases;": exit status 1 (175.395889ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0126 18:53:11.861147    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220126184901-2083 exec mysql-b87c45988-gz248 -- mysql -ppassword -e "show databases;"
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220126184901-2083 exec mysql-b87c45988-gz248 -- mysql -ppassword -e "show databases;": exit status 1 (134.346262ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220126184901-2083 exec mysql-b87c45988-gz248 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.79s)

TestFunctional/parallel/FileSync (0.72s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1781: Checking for existence of /etc/test/nested/copy/2083/hosts within VM
functional_test.go:1783: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /etc/test/nested/copy/2083/hosts"
functional_test.go:1788: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.72s)

TestFunctional/parallel/CertSync (4.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1824: Checking for existence of /etc/ssl/certs/2083.pem within VM
functional_test.go:1825: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /etc/ssl/certs/2083.pem"
functional_test.go:1824: Checking for existence of /usr/share/ca-certificates/2083.pem within VM
functional_test.go:1825: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /usr/share/ca-certificates/2083.pem"
functional_test.go:1824: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1825: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1851: Checking for existence of /etc/ssl/certs/20832.pem within VM
functional_test.go:1852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /etc/ssl/certs/20832.pem"
functional_test.go:1851: Checking for existence of /usr/share/ca-certificates/20832.pem within VM
functional_test.go:1852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /usr/share/ca-certificates/20832.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1851: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.11s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220126184901-2083 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.33s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2125: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 version -o=json --components: (1.332313969s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.2
k8s.gcr.io/kube-proxy:v1.23.2
k8s.gcr.io/kube-controller-manager:v1.23.2
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220126184901-2083
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/kube-proxy                       | v1.23.2                        | d922ca3da64b3 | 112MB  |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | e1482a24335a6 | 220MB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-20220126184901-2083 | 999523211c02c | 30B    |
| docker.io/library/mysql                     | 5.7                            | 0712d5dc1b147 | 448MB  |
| docker.io/library/nginx                     | latest                         | c316d5a335a5c | 142MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.2                        | 6114d758d6d16 | 53.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | 7801cfc6d5c07 | 34.4MB |
| docker.io/library/nginx                     | alpine                         | bef258acf10dc | 23.4MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.2                        | 8a0228dd6a683 | 135MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.2                        | 4783639ba7e03 | 125MB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220126184901-2083 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format json:
[{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.2"],"size":"112000000"},{"id":"4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.2"],"size":"125000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"999523211c02cd41fa265dd529a0fc8fb5a67f174ea98ab8b4ce5e0fd21a4ed1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220126184901-2083"],"size":"30"},{"id":"8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.2"],"size":"135000000"},{"id":"6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.2"],"size":"53500000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220126184901-2083"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls --format yaml:
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.2
size: "112000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
size: "32900000"
- id: 999523211c02cd41fa265dd529a0fc8fb5a67f174ea98ab8b4ce5e0fd21a4ed1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220126184901-2083
size: "30"
- id: 8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.2
size: "135000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "448000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.2
size: "125000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.2
size: "53500000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.50s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh pgrep buildkitd: exit status 1 (638.186708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image build -t localhost/my-image:functional-20220126184901-2083 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image build -t localhost/my-image:functional-20220126184901-2083 testdata/build: (2.60093105s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image build -t localhost/my-image:functional-20220126184901-2083 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 307f78f80d05
Removing intermediate container 307f78f80d05
---> 24a95b9983b0
Step 3/3 : ADD content.txt /
---> 5652a23a0e77
Successfully built 5652a23a0e77
Successfully tagged localhost/my-image:functional-20220126184901-2083
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

TestFunctional/parallel/ImageCommands/Setup (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.104230121s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083: (3.309902217s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.47s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1971: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.47s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1971: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.51s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1971: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.51s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083: (2.40161151s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083: (4.548861633s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.96s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image save gcr.io/google-containers/addon-resizer:functional-20220126184901-2083 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image save gcr.io/google-containers/addon-resizer:functional-20220126184901-2083 /Users/jenkins/workspace/addon-resizer-save.tar: (2.041050518s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image rm gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load /Users/jenkins/workspace/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.710997469s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220126184901-2083 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220126184901-2083: (2.594308448s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.83s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.86s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1277: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.86s)

TestFunctional/parallel/ProfileCmd/profile_list (0.75s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1312: (dbg) Run:  out/minikube-darwin-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1317: Took "672.384968ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1326: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1331: Took "79.399099ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.75s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.83s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1363: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1368: Took "734.416189ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1376: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1381: Took "98.325328ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.83s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220126184901-2083 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220126184901-2083 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [f2ed80bc-c827-40c6-b000-d3d66a0a6eca] Pending
helpers_test.go:343: "nginx-svc" [f2ed80bc-c827-40c6-b000-d3d66a0a6eca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [f2ed80bc-c827-40c6-b000-d3d66a0a6eca] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007662509s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220126184901-2083 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (4.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (4.12s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220126184901-2083 tunnel --alsologtostderr] ...
helpers_test.go:501: unable to terminate pid 4680: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/any-port (9.76s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220126184901-2083 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3062987692:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1643252020965196000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3062987692/created-by-test
functional_test_mount_test.go:110: wrote "test-1643252020965196000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3062987692/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1643252020965196000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3062987692/test-1643252020965196000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (688.351998ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 02:53 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 02:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 02:53 test-1643252020965196000
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh cat /mount-9p/test-1643252020965196000
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220126184901-2083 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [23386fbf-f244-4c7a-8d30-eab5b7123200] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [23386fbf-f244-4c7a-8d30-eab5b7123200] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [23386fbf-f244-4c7a-8d30-eab5b7123200] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.035563382s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220126184901-2083 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh stat /mount-9p/created-by-pod
2022/01/26 18:53:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220126184901-2083 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest3062987692:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.76s)

TestFunctional/parallel/MountCmd/specific-port (3.47s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220126184901-2083 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1791610984:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (826.872604ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220126184901-2083 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1791610984:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh "sudo umount -f /mount-9p": exit status 1 (649.64398ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20220126184901-2083 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220126184901-2083 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1791610984:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.47s)

TestFunctional/delete_addon-resizer_images (0.27s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220126184901-2083
--- PASS: TestFunctional/delete_addon-resizer_images (0.27s)

TestFunctional/delete_my-image_image (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220126184901-2083
--- PASS: TestFunctional/delete_my-image_image (0.12s)

TestFunctional/delete_minikube_cached_images (0.12s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220126184901-2083
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

TestIngressAddonLegacy/StartLegacyK8sCluster (138.39s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220126185412-2083 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0126 18:55:27.968189    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 18:55:55.700659    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220126185412-2083 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m18.390878107s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (138.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.43s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220126185412-2083 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220126185412-2083 addons enable ingress --alsologtostderr -v=5: (15.429364323s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.43s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220126185412-2083 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestJSONOutput/start/Command (127.57s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220126185749-2083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0126 18:57:54.326557    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.331809    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.341907    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.367776    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.408107    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.491351    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.651790    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:54.972202    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:55.614283    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:56.897065    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:57:59.466283    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:58:04.592243    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:58:14.832903    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:58:35.314684    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 18:59:16.284022    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220126185749-2083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m7.566188111s)
--- PASS: TestJSONOutput/start/Command (127.57s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.86s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220126185749-2083 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.86s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.84s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220126185749-2083 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.84s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220126185749-2083 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220126185749-2083 --output=json --user=testUser: (18.089461731s)
--- PASS: TestJSONOutput/stop/Command (18.09s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220126190023-2083 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220126190023-2083 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (121.526991ms)

-- stdout --
	{"specversion":"1.0","id":"87bf6f49-46fc-4b4a-a363-e88a93953f82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220126190023-2083] minikube v1.25.1 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d1fa72c-2b07-4c3c-94a3-b07ab9435de6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"db93b390-9d4a-4300-bc22-078e023dcfe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig"}}
	{"specversion":"1.0","id":"e9d25e01-7cd6-4e63-ad68-dcaa8450be5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7af4fb51-ed1c-4ad7-91c3-ac311e00eca0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d410b60-bfb6-42b9-a925-d0f0c35b105b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube"}}
	{"specversion":"1.0","id":"fe4bf9b6-595b-436e-9553-8640648bef37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220126190023-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220126190023-2083
--- PASS: TestErrorJSONOutput (0.78s)

TestKicCustomNetwork/create_custom_network (92.84s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220126190023-2083 --network=
E0126 19:00:27.959178    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:00:38.222997    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220126190023-2083 --network=: (1m18.997531244s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220126190023-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220126190023-2083
E0126 19:01:46.890480    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:46.895614    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:46.906824    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:46.933329    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:46.979641    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:47.068940    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:47.237662    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:47.567269    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:48.216427    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:49.503746    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:01:52.066397    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220126190023-2083: (13.724513517s)
--- PASS: TestKicCustomNetwork/create_custom_network (92.84s)

TestKicCustomNetwork/use_default_bridge_network (78.1s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220126190156-2083 --network=bridge
E0126 19:01:57.192183    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:02:07.436059    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:02:27.918627    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:02:54.324882    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220126190156-2083 --network=bridge: (1m8.154196115s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220126190156-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220126190156-2083
E0126 19:03:08.884562    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220126190156-2083: (9.829130819s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (78.10s)

TestKicExistingNetwork (93.09s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220126190320-2083 --network=existing-network
E0126 19:03:22.065771    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 19:04:30.805196    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220126190320-2083 --network=existing-network: (1m13.453354065s)
helpers_test.go:176: Cleaning up "existing-network-20220126190320-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220126190320-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220126190320-2083: (13.614365809s)
--- PASS: TestKicExistingNetwork (93.09s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (49.31s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220126190448-2083 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0126 19:05:27.950891    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220126190448-2083 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (48.305914369s)
--- PASS: TestMountStart/serial/StartWithMountFirst (49.31s)

TestMountStart/serial/VerifyMountFirst (0.61s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220126190448-2083 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.61s)

TestMountStart/serial/StartWithMountSecond (49.31s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220126190448-2083 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220126190448-2083 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (48.305525708s)
--- PASS: TestMountStart/serial/StartWithMountSecond (49.31s)

TestMountStart/serial/VerifyMountSecond (0.61s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220126190448-2083 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.61s)

TestMountStart/serial/DeleteFirst (12.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220126190448-2083 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220126190448-2083 --alsologtostderr -v=5: (12.579627241s)
--- PASS: TestMountStart/serial/DeleteFirst (12.58s)

TestMountStart/serial/VerifyMountPostDelete (0.61s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220126190448-2083 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.61s)

TestMountStart/serial/Stop (7.79s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220126190448-2083
E0126 19:06:46.880916    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
mount_start_test.go:156: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220126190448-2083: (7.791771056s)
--- PASS: TestMountStart/serial/Stop (7.79s)

TestMountStart/serial/RestartStopped (31.3s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220126190448-2083
E0126 19:06:51.043612    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:07:14.633049    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
mount_start_test.go:167: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220126190448-2083: (30.29265029s)
--- PASS: TestMountStart/serial/RestartStopped (31.30s)

TestMountStart/serial/VerifyMountPostStop (0.61s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220126190448-2083 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.61s)

TestMultiNode/serial/FreshStart2Nodes (237.74s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220126190733-2083 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0126 19:07:54.302157    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 19:10:27.941302    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220126190733-2083 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m56.627819168s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr: (1.109253605s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (237.74s)

TestMultiNode/serial/DeployApp2Nodes (6.23s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.96612683s)
multinode_test.go:491: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- rollout status deployment/busybox: (2.902432544s)
multinode_test.go:497: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-clfp2 -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-w5bjj -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-clfp2 -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-w5bjj -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-clfp2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-w5bjj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.23s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-clfp2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-clfp2 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-w5bjj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220126190733-2083 -- exec busybox-7978565885-w5bjj -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

TestMultiNode/serial/AddNode (120.05s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220126190733-2083 -v 3 --alsologtostderr
E0126 19:11:46.875753    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:12:54.305402    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220126190733-2083 -v 3 --alsologtostderr: (1m58.482620463s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr: (1.564738392s)
--- PASS: TestMultiNode/serial/AddNode (120.05s)

TestMultiNode/serial/ProfileList (0.69s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (22.87s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --output json --alsologtostderr: (1.56859052s)
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp testdata/cp-test.txt multinode-20220126190733-2083:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test3809805052/cp-test_multinode-20220126190733-2083.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083:/home/docker/cp-test.txt multinode-20220126190733-2083-m02:/home/docker/cp-test_multinode-20220126190733-2083_multinode-20220126190733-2083-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m02 "sudo cat /home/docker/cp-test_multinode-20220126190733-2083_multinode-20220126190733-2083-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083:/home/docker/cp-test.txt multinode-20220126190733-2083-m03:/home/docker/cp-test_multinode-20220126190733-2083_multinode-20220126190733-2083-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m03 "sudo cat /home/docker/cp-test_multinode-20220126190733-2083_multinode-20220126190733-2083-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp testdata/cp-test.txt multinode-20220126190733-2083-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test3809805052/cp-test_multinode-20220126190733-2083-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083-m02:/home/docker/cp-test.txt multinode-20220126190733-2083:/home/docker/cp-test_multinode-20220126190733-2083-m02_multinode-20220126190733-2083.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083 "sudo cat /home/docker/cp-test_multinode-20220126190733-2083-m02_multinode-20220126190733-2083.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083-m02:/home/docker/cp-test.txt multinode-20220126190733-2083-m03:/home/docker/cp-test_multinode-20220126190733-2083-m02_multinode-20220126190733-2083-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m03 "sudo cat /home/docker/cp-test_multinode-20220126190733-2083-m02_multinode-20220126190733-2083-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp testdata/cp-test.txt multinode-20220126190733-2083-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test3809805052/cp-test_multinode-20220126190733-2083-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083-m03:/home/docker/cp-test.txt multinode-20220126190733-2083:/home/docker/cp-test_multinode-20220126190733-2083-m03_multinode-20220126190733-2083.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083 "sudo cat /home/docker/cp-test_multinode-20220126190733-2083-m03_multinode-20220126190733-2083.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 cp multinode-20220126190733-2083-m03:/home/docker/cp-test.txt multinode-20220126190733-2083-m02:/home/docker/cp-test_multinode-20220126190733-2083-m03_multinode-20220126190733-2083-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 ssh -n multinode-20220126190733-2083-m02 "sudo cat /home/docker/cp-test_multinode-20220126190733-2083-m03_multinode-20220126190733-2083-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (22.87s)

TestMultiNode/serial/StopNode (12.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 node stop m03: (9.967496833s)
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status: exit status 7 (1.236878455s)

-- stdout --
	multinode-20220126190733-2083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220126190733-2083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220126190733-2083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr: exit status 7 (1.242462808s)

-- stdout --
	multinode-20220126190733-2083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220126190733-2083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220126190733-2083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0126 19:14:13.072917    8919 out.go:297] Setting OutFile to fd 1 ...
	I0126 19:14:13.073048    8919 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:14:13.073053    8919 out.go:310] Setting ErrFile to fd 2...
	I0126 19:14:13.073056    8919 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:14:13.073143    8919 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 19:14:13.073336    8919 out.go:304] Setting JSON to false
	I0126 19:14:13.073355    8919 mustload.go:65] Loading cluster: multinode-20220126190733-2083
	I0126 19:14:13.073643    8919 config.go:176] Loaded profile config "multinode-20220126190733-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:14:13.073660    8919 status.go:253] checking status of multinode-20220126190733-2083 ...
	I0126 19:14:13.074039    8919 cli_runner.go:133] Run: docker container inspect multinode-20220126190733-2083 --format={{.State.Status}}
	I0126 19:14:13.193051    8919 status.go:328] multinode-20220126190733-2083 host status = "Running" (err=<nil>)
	I0126 19:14:13.193094    8919 host.go:66] Checking if "multinode-20220126190733-2083" exists ...
	I0126 19:14:13.193449    8919 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220126190733-2083
	I0126 19:14:13.312840    8919 host.go:66] Checking if "multinode-20220126190733-2083" exists ...
	I0126 19:14:13.313105    8919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0126 19:14:13.313177    8919 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220126190733-2083
	I0126 19:14:13.432776    8919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59447 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/multinode-20220126190733-2083/id_rsa Username:docker}
	I0126 19:14:13.524841    8919 ssh_runner.go:195] Run: systemctl --version
	I0126 19:14:13.529617    8919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0126 19:14:13.539056    8919 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220126190733-2083
	I0126 19:14:13.657266    8919 kubeconfig.go:92] found "multinode-20220126190733-2083" server: "https://127.0.0.1:59446"
	I0126 19:14:13.657293    8919 api_server.go:165] Checking apiserver status ...
	I0126 19:14:13.657347    8919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0126 19:14:13.673268    8919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1724/cgroup
	I0126 19:14:13.681400    8919 api_server.go:181] apiserver freezer: "7:freezer:/docker/d009f2d17b74b0588dbe6b50e00190e39c62ba8bfb23c2b3cf448af9b81080ae/kubepods/burstable/pod03348984083dfd4888109612fcc587cc/8b8521fc91323d85d44f48e5c185336f6a5dd27a83a438b856f45ce86e2eb360"
	I0126 19:14:13.681462    8919 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d009f2d17b74b0588dbe6b50e00190e39c62ba8bfb23c2b3cf448af9b81080ae/kubepods/burstable/pod03348984083dfd4888109612fcc587cc/8b8521fc91323d85d44f48e5c185336f6a5dd27a83a438b856f45ce86e2eb360/freezer.state
	I0126 19:14:13.688725    8919 api_server.go:203] freezer state: "THAWED"
	I0126 19:14:13.688752    8919 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59446/healthz ...
	I0126 19:14:13.695843    8919 api_server.go:266] https://127.0.0.1:59446/healthz returned 200:
	ok
	I0126 19:14:13.695856    8919 status.go:419] multinode-20220126190733-2083 apiserver status = Running (err=<nil>)
	I0126 19:14:13.695864    8919 status.go:255] multinode-20220126190733-2083 status: &{Name:multinode-20220126190733-2083 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0126 19:14:13.695879    8919 status.go:253] checking status of multinode-20220126190733-2083-m02 ...
	I0126 19:14:13.696167    8919 cli_runner.go:133] Run: docker container inspect multinode-20220126190733-2083-m02 --format={{.State.Status}}
	I0126 19:14:13.814138    8919 status.go:328] multinode-20220126190733-2083-m02 host status = "Running" (err=<nil>)
	I0126 19:14:13.814172    8919 host.go:66] Checking if "multinode-20220126190733-2083-m02" exists ...
	I0126 19:14:13.814481    8919 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220126190733-2083-m02
	I0126 19:14:13.931393    8919 host.go:66] Checking if "multinode-20220126190733-2083-m02" exists ...
	I0126 19:14:13.931655    8919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0126 19:14:13.931713    8919 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220126190733-2083-m02
	I0126 19:14:14.050231    8919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59780 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/multinode-20220126190733-2083-m02/id_rsa Username:docker}
	I0126 19:14:14.142913    8919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0126 19:14:14.152273    8919 status.go:255] multinode-20220126190733-2083-m02 status: &{Name:multinode-20220126190733-2083-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0126 19:14:14.152316    8919 status.go:253] checking status of multinode-20220126190733-2083-m03 ...
	I0126 19:14:14.152641    8919 cli_runner.go:133] Run: docker container inspect multinode-20220126190733-2083-m03 --format={{.State.Status}}
	I0126 19:14:14.272637    8919 status.go:328] multinode-20220126190733-2083-m03 host status = "Stopped" (err=<nil>)
	I0126 19:14:14.272686    8919 status.go:341] host is not running, skipping remaining checks
	I0126 19:14:14.272694    8919 status.go:255] multinode-20220126190733-2083-m03 status: &{Name:multinode-20220126190733-2083-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (12.45s)

TestMultiNode/serial/StartAfterStop (51.85s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 node start m03 --alsologtostderr
E0126 19:14:17.408291    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 node start m03 --alsologtostderr: (50.099927405s)
multinode_test.go:266: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status
multinode_test.go:266: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status: (1.595685596s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (51.85s)

TestMultiNode/serial/RestartKeepsNodes (264.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220126190733-2083
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220126190733-2083
E0126 19:15:27.942154    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220126190733-2083: (42.87584473s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220126190733-2083 --wait=true -v=8 --alsologtostderr
E0126 19:16:46.869845    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:17:54.296251    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 19:18:09.984859    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220126190733-2083 --wait=true -v=8 --alsologtostderr: (3m41.070916732s)
multinode_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220126190733-2083
--- PASS: TestMultiNode/serial/RestartKeepsNodes (264.04s)

TestMultiNode/serial/DeleteNode (17.6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 node delete m03: (14.558776118s)
multinode_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr: (1.122491383s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:429: (dbg) Done: kubectl get nodes: (1.745408668s)
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (17.60s)

TestMultiNode/serial/StopMultiNode (25.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 stop
multinode_test.go:319: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 stop: (25.29444819s)
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status: exit status 7 (272.041264ms)

-- stdout --
	multinode-20220126190733-2083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220126190733-2083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr: exit status 7 (269.322825ms)

-- stdout --
	multinode-20220126190733-2083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220126190733-2083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0126 19:20:13.356123    9789 out.go:297] Setting OutFile to fd 1 ...
	I0126 19:20:13.356245    9789 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:20:13.356249    9789 out.go:310] Setting ErrFile to fd 2...
	I0126 19:20:13.356252    9789 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0126 19:20:13.356321    9789 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0126 19:20:13.356484    9789 out.go:304] Setting JSON to false
	I0126 19:20:13.356498    9789 mustload.go:65] Loading cluster: multinode-20220126190733-2083
	I0126 19:20:13.356755    9789 config.go:176] Loaded profile config "multinode-20220126190733-2083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0126 19:20:13.356768    9789 status.go:253] checking status of multinode-20220126190733-2083 ...
	I0126 19:20:13.357115    9789 cli_runner.go:133] Run: docker container inspect multinode-20220126190733-2083 --format={{.State.Status}}
	I0126 19:20:13.470266    9789 status.go:328] multinode-20220126190733-2083 host status = "Stopped" (err=<nil>)
	I0126 19:20:13.470291    9789 status.go:341] host is not running, skipping remaining checks
	I0126 19:20:13.470299    9789 status.go:255] multinode-20220126190733-2083 status: &{Name:multinode-20220126190733-2083 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0126 19:20:13.470328    9789 status.go:253] checking status of multinode-20220126190733-2083-m02 ...
	I0126 19:20:13.470623    9789 cli_runner.go:133] Run: docker container inspect multinode-20220126190733-2083-m02 --format={{.State.Status}}
	I0126 19:20:13.584789    9789 status.go:328] multinode-20220126190733-2083-m02 host status = "Stopped" (err=<nil>)
	I0126 19:20:13.584811    9789 status.go:341] host is not running, skipping remaining checks
	I0126 19:20:13.584815    9789 status.go:255] multinode-20220126190733-2083-m02 status: &{Name:multinode-20220126190733-2083-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.84s)

TestMultiNode/serial/RestartMultiNode (151.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220126190733-2083 --wait=true -v=8 --alsologtostderr --driver=docker 
E0126 19:20:27.936551    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:21:46.870962    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220126190733-2083 --wait=true -v=8 --alsologtostderr --driver=docker : (2m28.264349559s)
multinode_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220126190733-2083 status --alsologtostderr: (1.136741636s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:379: (dbg) Done: kubectl get nodes: (1.738309013s)
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (151.29s)

TestMultiNode/serial/ValidateNameConflict (104.79s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220126190733-2083
multinode_test.go:457: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220126190733-2083-m02 --driver=docker 
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220126190733-2083-m02 --driver=docker : exit status 14 (340.536794ms)

-- stdout --
	* [multinode-20220126190733-2083-m02] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220126190733-2083-m02' is duplicated with machine name 'multinode-20220126190733-2083-m02' in profile 'multinode-20220126190733-2083'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220126190733-2083-m03 --driver=docker 
E0126 19:22:54.303092    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 19:23:31.047299    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220126190733-2083-m03 --driver=docker : (1m27.447784903s)
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220126190733-2083
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220126190733-2083: exit status 80 (611.026367ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220126190733-2083
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220126190733-2083-m03 already exists in multinode-20220126190733-2083-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220126190733-2083-m03
multinode_test.go:477: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220126190733-2083-m03: (16.347681755s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (104.79s)

TestPreload (236.8s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220126192452-2083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0126 19:25:27.941021    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:26:46.867604    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220126192452-2083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m31.251941084s)
preload_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220126192452-2083 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20220126192452-2083 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.300199093s)
preload_test.go:72: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220126192452-2083 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
E0126 19:27:54.292774    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220126192452-2083 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (1m8.974312999s)
preload_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220126192452-2083 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20220126192452-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220126192452-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220126192452-2083: (13.611095378s)
--- PASS: TestPreload (236.80s)
TestScheduledStopUnix (156.11s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220126192849-2083 --memory=2048 --driver=docker 
scheduled_stop_test.go:129: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220126192849-2083 --memory=2048 --driver=docker : (1m17.138799605s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220126192849-2083 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220126192849-2083 -n scheduled-stop-20220126192849-2083
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220126192849-2083 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220126192849-2083 --cancel-scheduled
E0126 19:30:27.933019    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220126192849-2083 -n scheduled-stop-20220126192849-2083
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220126192849-2083
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220126192849-2083 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E0126 19:30:57.406059    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220126192849-2083
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220126192849-2083: exit status 7 (157.137324ms)
-- stdout --
	scheduled-stop-20220126192849-2083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220126192849-2083 -n scheduled-stop-20220126192849-2083
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220126192849-2083 -n scheduled-stop-20220126192849-2083: exit status 7 (154.891499ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220126192849-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220126192849-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220126192849-2083: (6.260345528s)
--- PASS: TestScheduledStopUnix (156.11s)
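The schedule/cancel/reschedule sequence above passes durations such as `5m` and `15s` to `minikube stop --schedule`. As an illustration only (not minikube's implementation), a sketch of turning such a flag value into an absolute stop time:

```python
from datetime import datetime, timedelta

# Map single-letter duration suffixes to timedelta keyword arguments.
UNITS = {"s": "seconds", "m": "minutes", "h": "hours"}

def parse_schedule(flag: str) -> timedelta:
    """Parse simple single-unit durations such as '5m' or '15s'."""
    value, unit = int(flag[:-1]), flag[-1]
    return timedelta(**{UNITS[unit]: value})

# Example: a stop scheduled at 19:30 with --schedule 5m would fire at 19:35.
now = datetime(2022, 1, 26, 19, 30, 0)
print(now + parse_schedule("5m"))             # 2022-01-26 19:35:00
print(parse_schedule("15s").total_seconds())  # 15.0
```

The test exercises exactly these two values: a long `5m` schedule that it cancels, and a short `15s` schedule that it allows to fire before checking for the `Stopped` state.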
TestInsufficientStorage (66.62s)
=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220126193125-2083 --memory=2048 --output=json --wait=true --driver=docker 
E0126 19:31:46.865088    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220126193125-2083 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (52.73341683s)
-- stdout --
	{"specversion":"1.0","id":"43e68fd1-67c5-4e24-ac67-8db3e92c63df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220126193125-2083] minikube v1.25.1 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc6b9edf-dcb0-422d-a10a-9edf4f3ae7af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"99443dcd-ac42-481d-b10d-09f26be55768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig"}}
	{"specversion":"1.0","id":"8403ad18-1ddf-4b13-8b9e-9ab5597e46aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"b9a78dea-e9b3-4829-afa1-b509cf011b5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d6b53020-c912-4789-958f-adaff419f176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube"}}
	{"specversion":"1.0","id":"7799b13b-29c1-48ff-9e3e-f8d2c795a384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"22c456ba-44db-4bcf-979a-5aa7565bd0fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"58eeb0a0-c250-415d-b7a1-3f34832f3b56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220126193125-2083 in cluster insufficient-storage-20220126193125-2083","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3dae2a14-acdb-4d14-a2b2-8107a5034333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9c767bb-173d-41c8-acfa-33b53d5b6b18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"34cb22e7-e19a-4bfe-98cd-c37bd7476662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220126193125-2083 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220126193125-2083 --output=json --layout=cluster: exit status 7 (611.470262ms)
-- stdout --
	{"Name":"insufficient-storage-20220126193125-2083","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220126193125-2083","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0126 19:32:19.122493   11567 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220126193125-2083" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220126193125-2083 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220126193125-2083 --output=json --layout=cluster: exit status 7 (605.931572ms)
-- stdout --
	{"Name":"insufficient-storage-20220126193125-2083","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220126193125-2083","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0126 19:32:19.728696   11584 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220126193125-2083" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	E0126 19:32:19.740203   11584 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/insufficient-storage-20220126193125-2083/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220126193125-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220126193125-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220126193125-2083: (12.664568843s)
--- PASS: TestInsufficientStorage (66.62s)
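The `--output=json` lines captured above are CloudEvents-style records, one JSON object per line. A minimal sketch of classifying such a line and pulling out the error payload, assuming only the field names visible in the output above (the sample line is abridged from the captured `RSRC_DOCKER_STORAGE` event):

```python
import json

# One CloudEvents-style line as emitted by `minikube start --output=json`
# (abridged from the captured output above).
line = (
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space! (/var is at 100%% of capacity)"}}'
)

def classify(raw: str):
    """Return (event_kind, payload) for a single JSON log line."""
    event = json.loads(raw)
    # The suffix of the CloudEvents "type" field distinguishes step/info/error.
    kind = event["type"].rsplit(".", 1)[-1]
    return kind, event.get("data", {})

kind, data = classify(line)
print(kind, data["exitcode"], data["name"])  # error 26 RSRC_DOCKER_STORAGE
```

This is how a caller could detect the storage-exhaustion condition the test deliberately provokes via `MINIKUBE_TEST_STORAGE_CAPACITY=100`, rather than scraping human-readable output.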
TestRunningBinaryUpgrade (191.08s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3655503411.exe start -p running-upgrade-20220126193859-2083 --memory=2200 --vm-driver=docker 
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3655503411.exe start -p running-upgrade-20220126193859-2083 --memory=2200 --vm-driver=docker : (1m32.400714174s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20220126193859-2083 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0126 19:41:46.925905    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20220126193859-2083 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m30.333168863s)
helpers_test.go:176: Cleaning up "running-upgrade-20220126193859-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220126193859-2083
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220126193859-2083: (6.954053316s)
--- PASS: TestRunningBinaryUpgrade (191.08s)
TestKubernetesUpgrade (219.48s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0126 19:35:27.934779    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : (1m28.660225852s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220126193519-2083
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220126193519-2083: (16.354294624s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220126193519-2083 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220126193519-2083 status --format={{.Host}}: exit status 7 (155.178285ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker 
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker : (59.631930776s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220126193519-2083 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (367.6281ms)
-- stdout --
	* [kubernetes-upgrade-20220126193519-2083] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.3-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220126193519-2083
	    minikube start -p kubernetes-upgrade-20220126193519-2083 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220126193519-20832 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.3-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220126193519-2083 --kubernetes-version=v1.23.3-rc.0
	    
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220126193519-2083 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker : (40.198335826s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220126193519-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220126193519-2083
=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220126193519-2083: (14.012407901s)
--- PASS: TestKubernetesUpgrade (219.48s)
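The downgrade refusal above (exit status 106, `K8S_DOWNGRADE_UNSUPPORTED`) comes down to comparing the cluster's current Kubernetes version against the requested one. A sketch of that comparison, using the two versions from the log; the parsing helper is hypothetical, not minikube's actual code:

```python
def parse_version(v: str):
    """Split 'v1.23.3-rc.0' into a comparable (major, minor, patch) tuple.
    Pre-release suffixes like '-rc.0' are dropped for this sketch."""
    core = v.lstrip("v").split("-")[0]
    parts = (core.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

def is_downgrade(current: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster's."""
    return parse_version(requested) < parse_version(current)

# Matches the scenario in the log: v1.23.3-rc.0 cluster, v1.16.0 requested.
print(is_downgrade("v1.23.3-rc.0", "v1.16.0"))  # True -> refuse, exit 106
```

Tuple comparison makes `(1, 16, 0) < (1, 23, 3)` true, so the requested start is rejected and the suggestions shown in the stderr block (delete and recreate, second cluster, or keep the existing version) are printed instead.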
TestMissingContainerUpgrade (169.61s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1618391961.exe start -p missing-upgrade-20220126193433-2083 --memory=2200 --driver=docker 
E0126 19:34:49.979958    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1618391961.exe start -p missing-upgrade-20220126193433-2083 --memory=2200 --driver=docker : (55.825064069s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220126193433-2083
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220126193433-2083: (14.764354457s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220126193433-2083
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20220126193433-2083 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0126 19:36:46.925644    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20220126193433-2083 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m32.546093032s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220126193433-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220126193433-2083
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220126193433-2083: (5.211100949s)
--- PASS: TestMissingContainerUpgrade (169.61s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (472.871193ms)
-- stdout --
	* [NoKubernetes-20220126193232-2083] minikube v1.25.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13251
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
TestNoKubernetes/serial/StartWithK8s (67.14s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --driver=docker 
E0126 19:32:54.292486    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
no_kubernetes_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --driver=docker : (1m6.283661325s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220126193232-2083 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (67.14s)
TestNoKubernetes/serial/StartWithStopK8s (30.66s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --no-kubernetes --driver=docker 
no_kubernetes_test.go:113: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --no-kubernetes --driver=docker : (14.834557878s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220126193232-2083 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220126193232-2083 status -o json: exit status 2 (636.061645ms)
-- stdout --
	{"Name":"NoKubernetes-20220126193232-2083","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220126193232-2083
no_kubernetes_test.go:125: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220126193232-2083: (15.189946364s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.66s)
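The `status -o json` document shown above (host running, Kubernetes components stopped after a `--no-kubernetes` restart) can be checked programmatically. A sketch using the field names from the captured output:

```python
import json

# Status document as captured above for the --no-kubernetes profile.
status_json = (
    '{"Name":"NoKubernetes-20220126193232-2083","Host":"Running",'
    '"Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured",'
    '"Worker":false}'
)

status = json.loads(status_json)
host_up = status["Host"] == "Running"
k8s_up = status["Kubelet"] == "Running" and status["APIServer"] == "Running"
print(f"host up: {host_up}, kubernetes up: {k8s_up}")
# host up: True, kubernetes up: False
```

This "container up, Kubernetes down" combination is why the command exits with status 2 in the log: the overall status is degraded even though the test considers it the expected state.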
TestNoKubernetes/serial/Start (40.02s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --no-kubernetes --driver=docker 
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --no-kubernetes --driver=docker : (40.020002419s)
--- PASS: TestNoKubernetes/serial/Start (40.02s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.77s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220126193232-2083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220126193232-2083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (772.684773ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.77s)
TestNoKubernetes/serial/ProfileList (2.33s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:170: (dbg) Done: out/minikube-darwin-amd64 profile list: (1.196623862s)
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (1.134290415s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.33s)
TestNoKubernetes/serial/Stop (1.95s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220126193232-2083
no_kubernetes_test.go:159: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220126193232-2083: (1.950870758s)
--- PASS: TestNoKubernetes/serial/Stop (1.95s)
TestNoKubernetes/serial/StartNoArgs (13.65s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --driver=docker 
no_kubernetes_test.go:192: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220126193232-2083 --driver=docker : (13.650171873s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (13.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.63s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220126193232-2083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220126193232-2083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (632.784197ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.63s)
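The two VerifyK8sNotRunning checks above pass precisely because the ssh'd command fails: `systemctl is-active --quiet` exits 0 only when the unit is active and 3 when it is inactive, which the ssh wrapper surfaces as "Process exited with status 3". A minimal standalone sketch of that logic (the `verify_not_running` helper is illustrative, not part of the test suite, and the exit status is injected so it runs without a cluster):

```shell
# Illustrative re-creation of the check: $1 stands in for the exit status of
#   minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
verify_not_running() {
  if [ "$1" -eq 0 ]; then
    echo "kubelet is active"                # would fail the test
    return 1
  fi
  echo "kubelet is not running (exit $1)"   # status 3 = unit inactive
}

verify_not_running 3   # prints: kubelet is not running (exit 3)
```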

TestStoppedBinaryUpgrade/Setup (1.33s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.33s)

TestStoppedBinaryUpgrade/Upgrade (151.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3270132053.exe start -p stopped-upgrade-20220126193723-2083 --memory=2200 --vm-driver=docker 
E0126 19:37:54.352542    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3270132053.exe start -p stopped-upgrade-20220126193723-2083 --memory=2200 --vm-driver=docker : (1m30.24337785s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3270132053.exe -p stopped-upgrade-20220126193723-2083 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3270132053.exe -p stopped-upgrade-20220126193723-2083 stop: (17.09217721s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20220126193723-2083 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20220126193723-2083 --memory=2200 --alsologtostderr -v=1 --driver=docker : (43.692535784s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (151.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.27s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220126193723-2083
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220126193723-2083: (2.268856309s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.27s)

TestPause/serial/Start (117.23s)

=== RUN   TestPause/serial/Start
pause_test.go:82: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220126194007-2083 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0126 19:40:11.112710    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:40:27.994929    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory

=== CONT  TestPause/serial/Start
pause_test.go:82: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220126194007-2083 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m57.227564314s)
--- PASS: TestPause/serial/Start (117.23s)

TestPause/serial/SecondStartNoReconfiguration (7.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220126194007-2083 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220126194007-2083 --alsologtostderr -v=1 --driver=docker : (7.614826897s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.63s)

TestPause/serial/Pause (0.85s)

=== RUN   TestPause/serial/Pause
pause_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220126194007-2083 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

TestPause/serial/VerifyStatus (0.64s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220126194007-2083 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220126194007-2083 --output=json --layout=cluster: exit status 2 (640.333725ms)

-- stdout --
	{"Name":"pause-20220126194007-2083","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220126194007-2083","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.64s)
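The `--layout=cluster` JSON above encodes component states as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and the status command exits non-zero (2 in this run) because the apiserver is paused and the kubelet stopped. A hedged sketch of pulling the top-level StatusName out of such output with plain shell tools, no jq required (the trimmed JSON literal below is an illustrative stand-in for the real payload):

```shell
# Trimmed stand-in for the `minikube status --output=json --layout=cluster`
# payload shown above; only the fields used below are kept.
status_json='{"Name":"pause-20220126194007-2083","StatusCode":418,"StatusName":"Paused"}'

# Extract the first StatusName value: grab the key/value pair with grep -o,
# then take the 4th double-quote-delimited field (the value itself).
status_name=$(printf '%s' "$status_json" | grep -o '"StatusName":"[^"]*"' | head -n 1 | cut -d'"' -f4)
echo "$status_name"   # prints: Paused
```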

TestPause/serial/Unpause (0.83s)

=== RUN   TestPause/serial/Unpause
pause_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20220126194007-2083 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220126194007-2083 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (17.8s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20220126194007-2083 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20220126194007-2083 --alsologtostderr -v=5: (17.796557476s)
--- PASS: TestPause/serial/DeletePaused (17.80s)

TestPause/serial/VerifyDeletedResources (3.63s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:144: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:144: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (3.276099306s)
pause_test.go:170: (dbg) Run:  docker ps -a
pause_test.go:175: (dbg) Run:  docker volume inspect pause-20220126194007-2083
pause_test.go:175: (dbg) Non-zero exit: docker volume inspect pause-20220126194007-2083: exit status 1 (128.835647ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220126194007-2083

** /stderr **
pause_test.go:180: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.63s)
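The VerifyDeletedResources check above leans on `docker volume inspect` semantics: for a missing volume it prints `[]` on stdout, an "Error: No such volume" message on stderr, and exits 1, which is exactly what the test treats as proof the profile's volume was cleaned up. A daemon-free sketch of that assertion (the inspect output is simulated from the log, so no docker daemon is needed to run it):

```shell
# Simulated stdout of `docker volume inspect pause-<profile>` after delete,
# copied from the log above: an empty JSON array means docker no longer
# knows the volume.
inspect_out='[]'

if [ "$inspect_out" = "[]" ]; then
  echo "volume deleted"        # cleanup succeeded
else
  echo "volume still present"  # delete left the volume behind
fi
```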

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.25.1 on darwin
- MINIKUBE_LOCATION=13251
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4026449289
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4026449289/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4026449289/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4026449289/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.06s)
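The run above lists the two commands minikube needs to privilege the hyperkit driver: `chown root:wheel` plus `chmod u+s`, i.e. a root-owned setuid binary; the warning appears because sudo would have prompted for a password and the tests run with --interactive=false. The chmod half can be sketched against a throwaway file (the chown step is skipped here since it genuinely requires root):

```shell
# Stand-in for .minikube/bin/docker-machine-driver-hyperkit; a temp file is
# used so no privileges are required for the demonstration.
drv=$(mktemp)
chmod u+s "$drv"                         # set the setuid bit, as `sudo chmod u+s <driver>` would
[ -u "$drv" ] && echo "setuid bit set"   # test -u checks the setuid bit
rm -f "$drv"
```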

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (14.75s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.25.1 on darwin
- MINIKUBE_LOCATION=13251
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current4073024338
* Using the hyperkit driver based on user configuration
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (14.75s)

TestStartStop/group/old-k8s-version/serial/FirstStart (163.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220126194707-2083 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0126 19:47:37.474238    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220126194707-2083 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (2m43.614699615s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.61s)

TestStartStop/group/no-preload/serial/FirstStart (135.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220126194754-2083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220126194754-2083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0: (2m15.936495953s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (135.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220126194707-2083 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context old-k8s-version-20220126194707-2083 create -f testdata/busybox.yaml: (2.02343638s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [02d0c5ac-359a-448a-abb3-b3b1456a6bab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [02d0c5ac-359a-448a-abb3-b3b1456a6bab] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.014731331s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220126194707-2083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220126194707-2083 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220126194707-2083 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/old-k8s-version/serial/Stop (18.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220126194707-2083 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220126194707-2083 --alsologtostderr -v=3: (18.376830462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (18.38s)

TestStartStop/group/no-preload/serial/DeployApp (11.15s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220126194754-2083 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20220126194754-2083 create -f testdata/busybox.yaml: (1.889930924s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [3549d684-5a21-42b8-84dc-ad93b7b39db7] Pending
helpers_test.go:343: "busybox" [3549d684-5a21-42b8-84dc-ad93b7b39db7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [3549d684-5a21-42b8-84dc-ad93b7b39db7] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.020189251s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220126194754-2083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083: exit status 7 (213.418925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220126194707-2083 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/old-k8s-version/serial/SecondStart (123.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220126194707-2083 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220126194707-2083 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (2m3.102469712s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (123.78s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220126194754-2083 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220126194754-2083 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/no-preload/serial/Stop (12.5s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220126194754-2083 --alsologtostderr -v=3
E0126 19:50:27.995952    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220126194754-2083 --alsologtostderr -v=3: (12.496665601s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.50s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.41s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083: exit status 7 (170.276694ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220126194754-2083 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.41s)

TestStartStop/group/no-preload/serial/SecondStart (103.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220126194754-2083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0
E0126 19:51:30.046361    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
E0126 19:51:46.928631    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220126194754-2083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0: (1m42.835166172s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (103.52s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-7dqm9" [6ea42307-5eba-47d9-b440-65547bb10def] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015360229s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.9s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-7dqm9" [6ea42307-5eba-47d9-b440-65547bb10def] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010792026s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220126194754-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:276: (dbg) Done: kubectl --context no-preload-20220126194754-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.884882184s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.90s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-6s428" [5f6313af-b81d-4a67-8989-5696e2a76fa2] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01481289s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-6s428" [5f6313af-b81d-4a67-8989-5696e2a76fa2] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014263504s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220126194707-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:276: (dbg) Done: kubectl --context old-k8s-version-20220126194707-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.153504685s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.17s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.67s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220126194754-2083 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.67s)

TestStartStop/group/no-preload/serial/Pause (5.05s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220126194754-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083: exit status 2 (649.079548ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083: exit status 2 (654.668458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220126194754-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220126194754-2083 -n no-preload-20220126194754-2083: (1.146974886s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.05s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20220126194707-2083 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.78s)

TestStartStop/group/old-k8s-version/serial/Pause (4.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20220126194707-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083: exit status 2 (646.491253ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083: exit status 2 (648.593641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-20220126194707-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220126194707-2083 -n old-k8s-version-20220126194707-2083
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.38s)

TestStartStop/group/embed-certs/serial/FirstStart (69.88s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220126195254-2083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220126195254-2083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2: (1m9.875550186s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.88s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (113.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220126195259-2083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220126195259-2083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2: (1m53.327122123s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (113.33s)

TestStartStop/group/embed-certs/serial/DeployApp (11.06s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220126195254-2083 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20220126195254-2083 create -f testdata/busybox.yaml: (1.914633841s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [be130bcc-bf2f-43b0-bafa-ae96dbceacec] Pending
helpers_test.go:343: "busybox" [be130bcc-bf2f-43b0-bafa-ae96dbceacec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [be130bcc-bf2f-43b0-bafa-ae96dbceacec] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.017841708s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220126195254-2083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220126195254-2083 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220126195254-2083 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (16.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220126195254-2083 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220126195254-2083 --alsologtostderr -v=3: (16.767943351s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083: exit status 7 (160.785326ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220126195254-2083 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/embed-certs/serial/SecondStart (103.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220126195254-2083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220126195254-2083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2: (1m42.346241931s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (103.12s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.03s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220126195259-2083 create -f testdata/busybox.yaml
E0126 19:54:52.946070    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:52.951201    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:52.961371    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:52.988359    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:53.033387    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:53.121292    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:53.285486    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:53.612511    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:54:54.259227    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20220126195259-2083 create -f testdata/busybox.yaml: (1.880016715s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e2b02cc9-5516-47b4-a9f2-3af097823b78] Pending
E0126 19:54:55.539897    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
helpers_test.go:343: "busybox" [e2b02cc9-5516-47b4-a9f2-3af097823b78] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [e2b02cc9-5516-47b4-a9f2-3af097823b78] Running
E0126 19:54:58.104383    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.019966567s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220126195259-2083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.03s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220126195259-2083 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0126 19:55:03.224572    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220126195259-2083 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/default-k8s-different-port/serial/Stop (14.6s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220126195259-2083 --alsologtostderr -v=3
E0126 19:55:12.786138    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:12.791329    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:12.803337    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:12.823566    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:12.868890    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:12.954919    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:13.117266    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:13.437649    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:13.469765    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:55:14.086356    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:15.366546    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:17.935614    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220126195259-2083 --alsologtostderr -v=3: (14.601524419s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (14.60s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.52s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083: exit status 7 (172.787397ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220126195259-2083 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.52s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (91.43s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220126195259-2083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2
E0126 19:55:23.065326    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:28.040942    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 19:55:33.307718    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:55:33.957773    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:55:53.789265    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 19:56:14.918614    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220126195259-2083 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2: (1m30.764574844s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (91.43s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-2zgnp" [0b69965c-aa83-4114-ac2d-83dc3a4c81e4] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020553837s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.89s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-2zgnp" [0b69965c-aa83-4114-ac2d-83dc3a4c81e4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011243406s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220126195254-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context embed-certs-20220126195254-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.87432151s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.89s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.67s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220126195254-2083 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.67s)

TestStartStop/group/embed-certs/serial/Pause (4.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220126195254-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083: exit status 2 (667.931802ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083: exit status 2 (677.283581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220126195254-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220126195254-2083 -n embed-certs-20220126195254-2083
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.57s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220126195647-2083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220126195647-2083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0: (1m8.984931249s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (68.99s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-gzxfd" [1e47d99c-ec22-4460-b680-397214ee5c1a] Running
E0126 19:56:51.160165    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016766165s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-gzxfd" [1e47d99c-ec22-4460-b680-397214ee5c1a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010883263s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220126195259-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context default-k8s-different-port-20220126195259-2083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.107864355s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (7.12s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220126195259-2083 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.66s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220126195259-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083: exit status 2 (648.196947ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083: exit status 2 (647.513467ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220126195259-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220126195259-2083 -n default-k8s-different-port-20220126195259-2083
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.47s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220126194237-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0126 19:57:36.839753    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:57:54.402630    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
E0126 19:57:56.677325    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220126194237-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (1m54.209579543s)
--- PASS: TestNetworkPlugins/group/auto/Start (114.21s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220126195647-2083 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220126195647-2083 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220126195647-2083 --alsologtostderr -v=3: (18.079146353s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.08s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083: exit status 7 (156.412032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220126195647-2083 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220126195647-2083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220126195647-2083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0: (51.058273773s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.81s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220126195647-2083 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.68s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220126195647-2083 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083: exit status 2 (673.942402ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083: exit status 2 (662.464243ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220126195647-2083 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 unpause -p newest-cni-20220126195647-2083 --alsologtostderr -v=1: (1.072865993s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220126195647-2083 -n newest-cni-20220126195647-2083
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.89s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220126194339-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
E0126 19:59:52.948216    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 19:59:54.763662    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:54.768943    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:54.779064    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:54.799223    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:54.841859    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:54.929919    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:55.099998    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:55.426989    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:56.072929    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:57.362067    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 19:59:59.929758    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 20:00:05.050540    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 20:00:12.789847    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 20:00:15.290995    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 20:00:20.688913    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 20:00:28.043644    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 20:00:35.772585    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 20:00:40.522621    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 20:01:16.733193    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220126194339-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (2m4.584526205s)
--- PASS: TestNetworkPlugins/group/cilium/Start (124.58s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-n9445" [06bbc6fa-d569-4722-b2fa-862d6f38e6d2] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.02234676s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220126194339-2083 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.67s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220126194339-2083 replace --force -f testdata/netcat-deployment.yaml
E0126 20:01:46.983716    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
net_test.go:132: (dbg) Done: kubectl --context cilium-20220126194339-2083 replace --force -f testdata/netcat-deployment.yaml: (2.493278688s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-5g6lb" [f70ad189-dd22-40c5-8273-8592d2ec27ed] Pending
helpers_test.go:343: "netcat-668db85669-5g6lb" [f70ad189-dd22-40c5-8273-8592d2ec27ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-5g6lb" [f70ad189-dd22-40c5-8273-8592d2ec27ed] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.014616928s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.54s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220126194339-2083 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220126194339-2083 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220126194339-2083 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220126194339-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0126 20:02:38.661513    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
E0126 20:02:54.407710    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220126194339-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (2m2.681399148s)
--- PASS: TestNetworkPlugins/group/calico/Start (122.68s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-gc724" [c75dcd16-4ca8-4a62-ab31-a75d2baa6f39] Running
E0126 20:04:17.529771    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220126184901-2083/client.crt: no such file or directory
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.013790978s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220126194339-2083 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.66s)

TestNetworkPlugins/group/calico/NetCatPod (12.09s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20220126194339-2083 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context calico-20220126194339-2083 replace --force -f testdata/netcat-deployment.yaml: (2.044770489s)
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-fk9fs" [8c9705f9-9d3f-4042-aba9-6aed226f894b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-fk9fs" [8c9705f9-9d3f-4042-aba9-6aed226f894b] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.010585691s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.09s)

TestNetworkPlugins/group/enable-default-cni/Start (348.6s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220126194238-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0126 20:09:23.575718    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:09:52.956375    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 20:09:54.760985    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220126195259-2083/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220126194238-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (5m48.59492536s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (348.60s)

TestNetworkPlugins/group/kindnet/Start (81.52s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220126194239-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E0126 20:10:28.050132    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220126184248-2083/client.crt: no such file or directory
E0126 20:11:16.057224    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220126194707-2083/client.crt: no such file or directory
E0126 20:11:35.899584    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220126194754-2083/client.crt: no such file or directory
E0126 20:11:39.711218    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
E0126 20:11:46.984694    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220126185412-2083/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220126194239-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (1m21.520033602s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.52s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-sb67b" [28ad51f9-eb07-4e34-9bda-7185f0ad6046] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014254569s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.65s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220126194239-2083 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.65s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.99s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220126194239-2083 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context kindnet-20220126194239-2083 replace --force -f testdata/netcat-deployment.yaml: (1.964644681s)
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-8vq84" [bd35bd6f-0b81-4597-b2c4-ba1af55d9ba7] Pending
helpers_test.go:343: "netcat-668db85669-8vq84" [bd35bd6f-0b81-4597-b2c4-ba1af55d9ba7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-8vq84" [bd35bd6f-0b81-4597-b2c4-ba1af55d9ba7] Running
E0126 20:12:07.425242    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cilium-20220126194339-2083/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006164084s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.99s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.65s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220126194238-2083 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.65s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220126194238-2083 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context enable-default-cni-20220126194238-2083 replace --force -f testdata/netcat-deployment.yaml: (1.96980554s)
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-wp26w" [51749417-a73f-4f92-a20e-b28f3afa537b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0126 20:14:57.900658    2083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13251-927-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/calico-20220126194339-2083/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-668db85669-wp26w" [51749417-a73f-4f92-a20e-b28f3afa537b] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.009489175s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.00s)

TestNetworkPlugins/group/bridge/Start (101.98s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220126194238-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220126194238-2083 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m41.98245486s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.98s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.72s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220126194238-2083 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.72s)

TestNetworkPlugins/group/bridge/NetCatPod (15.91s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220126194238-2083 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context bridge-20220126194238-2083 replace --force -f testdata/netcat-deployment.yaml: (1.876366542s)
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-tkstw" [4a244b0a-d31b-4d9f-8e21-8aa26a80b419] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-668db85669-tkstw" [4a244b0a-d31b-4d9f-8e21-8aa26a80b419] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.010385683s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.91s)

Test skip (21/275)

TestDownloadOnly/v1.16.0/preload-exists (0.22s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.16.0/preload-exists (0.22s)

TestDownloadOnly/v1.23.2/preload-exists (0.14s)
=== RUN   TestDownloadOnly/v1.23.2/preload-exists
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.23.2/preload-exists (0.14s)

TestDownloadOnly/v1.23.3-rc.0/preload-exists (0.12s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/preload-exists
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/preload-exists (0.12s)

TestAddons/parallel/Registry (12.81s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 14.79661ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-gqw6q" [2369875d-06b2-4ca4-896d-337a7d922be5] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01151334s
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-jdgmh" [31a1de55-a248-4417-9ef4-2c5fa82f38c0] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016724974s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220126184248-2083 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220126184248-2083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220126184248-2083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.702280186s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (12.81s)

TestAddons/parallel/Ingress (29.43s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220126184248-2083 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (665.924449ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.111.231.76:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (168.864081ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.111.231.76:443: connect: connection refused
** /stderr **
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (10.868063247s)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.111.231.76:443: i/o timeout
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220126184248-2083 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [6b4ea2d0-35ed-496a-845c-531ca6291858] Pending
helpers_test.go:343: "nginx" [6b4ea2d0-35ed-496a-845c-531ca6291858] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [6b4ea2d0-35ed-496a-845c-531ca6291858] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008999257s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220126184248-2083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
=== CONT  TestAddons/parallel/Ingress
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (29.43s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:36: skipping: only runs with docker container runtime, currently testing 
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmd (12s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1431: (dbg) Run:  kubectl --context functional-20220126184901-2083 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1437: (dbg) Run:  kubectl --context functional-20220126184901-2083 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1442: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-8qmzv" [12d3ea63-6ac6-476d-8d86-0e2c45eb2cb2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-8qmzv" [12d3ea63-6ac6-476d-8d86-0e2c45eb2cb2] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1442: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.006497737s
functional_test.go:1447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220126184901-2083 service list
functional_test.go:1456: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (12.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing 
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing 
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.56s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220126185412-2083 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.407593435s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (202.832107ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.97.89.91:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (166.742356ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.97.89.91:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (157.366831ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.97.89.91:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (186.685311ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.97.89.91:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (10.155631186s)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.97.89.91:443: i/o timeout

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220126185412-2083 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [f6e9e277-8444-42b1-8506-46d4c3264e6b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [f6e9e277-8444-42b1-8506-46d4c3264e6b] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.007399271s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220126185412-2083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.56s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing  container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.66s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220126195258-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220126195258-2083
--- SKIP: TestStartStop/group/disable-driver-mounts (0.66s)

TestNetworkPlugins/group/kubenet (0.89s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:89: Skipping the test as  container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20220126194237-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubenet-20220126194237-2083
--- SKIP: TestNetworkPlugins/group/kubenet (0.89s)

TestNetworkPlugins/group/flannel (0.93s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220126194238-2083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220126194238-2083
--- SKIP: TestNetworkPlugins/group/flannel (0.93s)
